|
This is no functional change since aobjs never actually hit this path. (also it is
my bug from a while ago).
ok ariane@
|
|
Garbage collect uvm_pglist.c.
uvm_pglistalloc and free are just thin wrappers around pmemrange these
days and don't really need their own file.
ok ariane@
|
|
The new world order of pmemrange makes this data completely redundant
(being dealt with by the pmemrange constraints instead). Remove all code
that messes with the freelist.
While touching every caller of uvm_page_physload() anyway, add the flags
argument to all callers (all but one is 0 and that one already used
PHYSLOAD_DEVICE) and remove the macro magic that allowed callers to continue
without it.
Should shrink the code a bit, as well.
matthew@ pointed out some mistakes i'd made.
``freelist death, I like. Ok.'' ariane@
``I agree with the general direction, go ahead and i'll fix any fallout
shortly'' miod@ (68k, 88k and vax i could not check would build)
|
|
it belongs to a world order that isn't here anymore. More importantly it
has been unused for a fair while now.
ok thib@
|
|
At various times diffs have had debugging that checked that we don't
insert a page into the tree on top of an existing page, leaking that
page's references. Until the recent hackathon (and the introduction of
uvm_pagealloc_multi) the bufcache, for example, did an RB tree lookup on
insert to check (under #ifdef DEBUG || 1). So instead just check it on
pageinsert every time; since RB_INSERT returns any duplicate, this
check is pretty much free.
``emphatically yes'' beck@
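
A minimal userland sketch of why the check is free, using the <sys/tree.h>
RB macros; the page structure and key below are invented stand-ins for the
real uvm_page/uvm_object layout:

#include <sys/tree.h>
#include <assert.h>
#include <stddef.h>

struct xpage {
    RB_ENTRY(xpage) entry;
    unsigned long   offset;     /* key: offset within the object */
};

static int
xpage_cmp(const struct xpage *a, const struct xpage *b)
{
    return (a->offset < b->offset ? -1 : a->offset > b->offset);
}

RB_HEAD(xpage_tree, xpage);
RB_PROTOTYPE_STATIC(xpage_tree, xpage, entry, xpage_cmp);
RB_GENERATE_STATIC(xpage_tree, xpage, entry, xpage_cmp);

static void
xpage_insert(struct xpage_tree *tree, struct xpage *pg)
{
    struct xpage *dupe;

    /*
     * RB_INSERT returns NULL on success, or the element already in the
     * tree with the same key, so catching an insert on top of an
     * existing page costs nothing beyond the insert itself.
     */
    dupe = RB_INSERT(xpage_tree, tree, pg);
    assert(dupe == NULL);       /* KASSERT() in the kernel version */
}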
|
|
ok henning@
|
|
With this change bufcachepercent will be the percentage of dma reachable
memory that the buffer cache will attempt to use.
ok deraadt@ thib@ oga@
|
|
Bob needs this.
ok art@ bob@ thib@
|
|
Bogus chunks pointed out by matthew@ and miod@. No cookies for
marco@ and jasper@.
ok deraadt@ miod@ matthew@ jasper@ marco@
|
|
which contains the constraints for DMA/memory allocation for each
architecture, and dma_constraints which contains the range of addresses
that are DMA accessible by the system.
This is based on ariane@'s physcontig diff, with lots of bugfixes and
the following additions by myself:
Introduce a new function pool_set_constraints() which sets the address
range from which we allocate pages for the pool; this is now used
for the mbuf/mbuf cluster pools to keep them DMA accessible.
The !direct archs no longer stuff pages into the kernel object in
uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
Tested heavily by myself on i386, amd64 and sparc64. Some tests on
alpha and SGI.
"commit it" beck, art, oga, deraadt
"i like the diff" deraadt
|
|
recommit pmemrange:
physmem allocator: change the view of free memory from single
free pages to free ranges. Classify memory based on region with
associated use-counter (which is used to construct a priority
list of where to allocate memory).
Based on code from tedu@, help from many.
Usable now that bugs have been found and fixed in most architectures'
pmap.c.
ok by everyone who has done a pmap or uvm commit in the last year.
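
A toy sketch of the idea with invented names (the real uvm_pmemrange code
keeps the ranges in trees and is considerably more involved): free memory is
a list of page ranges, each classified and use-counted, and allocation walks
that list in priority order.

#include <stddef.h>

struct pmr_sketch {
    unsigned long       start;      /* first free page frame */
    unsigned long       npages;     /* length of the free range */
    int                 use;        /* demand seen by this region */
    struct pmr_sketch   *next;      /* priority order, cheapest first */
};

/* hand out pages from the first (least-wanted) range that is big enough */
static struct pmr_sketch *
pmr_pick(struct pmr_sketch *head, unsigned long npages)
{
    struct pmr_sketch *p;

    for (p = head; p != NULL; p = p->next)
        if (p->npages >= npages)
            return (p);
    return (NULL);
}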
|
|
sysctl.h was reliant on this particular include, and many drivers included
sysctl.h unnecessarily. remove sysctl.h or add proc.h as needed.
ok deraadt
|
|
ok kettenis@ beck@ (tentatively) and ariane@. deraadt asked for it to be
committed now.
original commit message:
extend uvm_page_physload to have the ability to add "device" pages to
the system.
This is needed in the case where you need managed pages so you can
handle faulting and pmap_page_protect() on said pages when you manage
memory in such regions (i'm looking at you, graphics cards).
these pages are flagged PG_DEV, and shall never be on the freelists;
assert this. behaviour remains unchanged in the non-device case;
specifically, for all archs currently in the tree we panic if called
after bootstrap.
ok art@ kettenis@, beck@
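
For illustration, a hypothetical driver handing its aperture to UVM might
look like the sketch below; the argument layout of uvm_page_physload() (page
frame numbers plus the new flags argument) is assumed from the description
above, and the helper name is invented.

#include <sys/param.h>
#include <uvm/uvm_extern.h>

/* hypothetical attach-time helper for a graphics card's memory */
void
example_physload_vram(paddr_t vram_start, psize_t vram_size)
{
    paddr_t first = atop(vram_start);               /* first page frame */
    paddr_t last = atop(vram_start + vram_size);    /* one past the last */

    /*
     * PHYSLOAD_DEVICE: the pages get vm_page structures (so faults and
     * pmap_page_protect() work on them), are flagged PG_DEV, and are
     * never put on the free lists.
     */
    uvm_page_physload(first, last, first, last, PHYSLOAD_DEVICE);
}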
|
|
more correctly reflect the new state of the world - that is - how many pages
can be cheaply reclaimed - which now includes clean buffer cache pages.
This change fixes situations where people would be running with a large
bufcachepercent, and still notice swapping without the buffer cache
backing off.
ok oga@, testing by many on tech@ and others. Thanks.
|
|
the kernel to reuse freed pages as quickly as possible, and it has been
finding bugs (some of which we have already fixed)
ok kettenis
|
|
Now instead of the global object hashtable, we have a per-object tree.
Testing shows no performance difference and a slight code shrink. OTOH when
locking is more fine-grained this should be faster since it avoids lock
contention on uvm.hashlock.
ok thib@, art@.
|
|
fixed, but now it is time for a little break from the chaos.
ok kettenis
|
|
cache locality and will pave the way for the new pmemrange allocator.
Based on hints from art@ and ariane@.
ok ariane@, deraadt@, oga@
|
|
This has been tested very very thoroughly on all archs we have
excepting 88k and 68k. Please see cvs log for the individual commit
messages.
ok beck@, thib@
|
|
More backouts in line with previous ones, this appears to bring us back to a
stable condition.
A machine forced to 64mb of ram cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
|
|
allocator).
"i can't see any obvious problems" oga
|
|
separately).
a change at or just before the hackathon has either exposed or added a
very very nasty memory corruption bug that is giving us hell right now.
So in the interest of kernel stability these diffs are being backed out
until such a time as that corruption bug has been found and squashed,
then the ones that are proven good may slowly return.
a quick hitlist of the main commits this backs out:
mine:
uvm_objwire
the lock change in uvm_swap.c
using trees for uvm objects instead of the hash
removing the pgo_releasepg callback.
art@'s:
putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate() since
all callers called that just prior anyway.
ok beck@, ariane@.
prompted by deraadt@.
|
|
> extend uvm_page_physload to have the ability to add "device" pages to the
> system.
since it was overlaid over a system that we warned would go "in to be
tested, but may be pulled out". oga, you just made me spend 20 minutes
of time I should not have had to spend doing this.
|
|
system.
This is needed in the case where you need managed pages so you can
handle faulting and pmap_page_protect() on said pages when you manage
memory in such regions (i'm looking at you, graphics cards).
these pages are flagged PG_DEV, and shall never be on the freelists;
assert this. behaviour remains unchanged in the non-device case;
specifically, for all archs currently in the tree we panic if called
after bootstrap.
ok art@, kettenis@, ariane@, beck@.
|
|
just move that into uvm_pagedeactivate.
oga@ ok
|
|
|
|
global lock, switch the uvm object pages to being kept in a per-object
RB_TREE. Right now this is approximately the same speed, but cleaner.
When biglock usage is reduced this will improve concurrency by reducing lock
contention.
ok beck@ art@. Thanks to jasper for the speed testing.
|
|
not encrypted.
|
|
pgo_releasepg() hook and just free the page the "normal" way in the one
place we'll ever see PG_RELEASED and should care (uvm_page_unbusy,
called in aiodoned).
ok art@, beck@, thib@
|
|
to free ranges.
Classify memory based on region with associated use-counter (which is used
to construct a priority list of where to allocate memory).
Based on code from tedu@, help from many.
Ok art@
|
|
it is also not part of an aobj.
Clear anon flags at pagefree: page is no longer part of an anon.
ok oga
|
|
Makes trace in ddb useful.
ok oga
|
|
sleep on them (and otherwise ignore them), sleep on the pointer to the
{aiodoned,pagedaemon}_proc members, and nuke the two extra words.
"no objections" art@, ok beck@.
|
|
nothing uses this code yet, but might as well do it the right way.
"if you can't live without commiting this." miod@
|
|
recursive in some cases (mostly involving swapping). A proper fix is in
the works, but this will unbreak kernels for now.
|
|
fraction of the wakeups and sleeps involved here actually grab that
lock. The remainder, on the other hand, always have the fpageq_lock
locked.
So, make this locking correct by switching the other users over to
fpageq_lock, too.
This would probably be better off being a semaphore, but for now at
least it's correct.
"ok, unless you want to implement semaphores" art@
|
|
Fix up the one case of lock recursion (which blatantly ignored the
comment right above it saying that we don't need to lock). The rest of
the lock usage has been checked and appears to be correct.
ok ariane@.
|
|
PHYS_TO_VM_PAGE inline again. This should stop function call overhead
killing the vax and other slow archs while keeping the benefit for the
faster platforms.
suggested by miod. ok miod@, toby@.
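
What the inline amounts to, in a simplified self-contained form (structures
and constants invented; the real lookup goes through uvm's physical segment
array):

#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;
#define PAGE_SHIFT  12
#define atop(x)     ((x) >> PAGE_SHIFT)

struct vm_page_sk { paddr_t phys_addr; };

struct physseg_sk {
    paddr_t              start, end; /* page frame numbers, [start, end) */
    struct vm_page_sk   *pgs;        /* one entry per frame in the segment */
};

extern struct physseg_sk physsegs[];
extern int nphyssegs;

/* inline keeps the common one-or-two-segment case down to a few compares */
static inline struct vm_page_sk *
phys_to_vm_page_sk(paddr_t pa)
{
    paddr_t pf = atop(pa);
    int i;

    for (i = 0; i < nphyssegs; i++)
        if (pf >= physsegs[i].start && pf < physsegs[i].end)
            return (&physsegs[i].pgs[pf - physsegs[i].start]);
    return (NULL);
}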
|
|
the simple lock with a real lock - an IPL_BIO mutex. While i'm here, make
the sleeping condition one hell of a lot simpler in the aio daemon.
some ideas from and ok art@.
|
|
into an IPL_VM blocking mutex, also slightly extend the locked area so
that it actually protects access to the page array (as the comment on
the lock declaration says it should).
ansify a few functions while i'm in the file.
"ok, even though you're sneaking in ansification in a diff. You dirty
you." art@
|
|
By pseudo-inline, I mean that if a certain macro was defined, they would
be inlined. However, no architecture defines that, and none has for a
very very long time. Therefore this mainly just makes the code a damned
sight easier to read. Some k&r -> ansi declarations while I'm in there.
"just commit it" art@. ok weingart@.
|
|
average arch port. They are also inline. This does not help, de-inline them.
shaves about 1k on i386 and amd64 bsd.mp. Probably similar amounts on
most architectures.
"no issue" beck@ "Nuke nuke nuke... make them functions" weingart@ "this
is good" art@
|
|
- Split up run queues so that every cpu has one.
- Make setrunqueue choose the cpu where we want to make this process
runnable (this should be refined and less brutal in the future).
- When choosing the cpu where we want to run, make some kind of educated
guess where it will be best to run (very naive right now).
Other:
- Set operations for sets of cpus.
- load average calculations per cpu.
- sched_is_idle() -> curcpu_is_idle()
tested, debugged and prodded by many@
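
A naive sketch of the kind of "educated guess" described above, with
made-up structures: prefer the cpu the process last ran on if it is idle,
otherwise take the least loaded one.

#include <stddef.h>

struct cpu_sk {
    int nrunnable;  /* length of this cpu's own run queue */
    int idle;
};

struct proc_sk {
    struct cpu_sk *last_cpu;    /* where this process ran last */
};

static struct cpu_sk *
choose_cpu(struct proc_sk *p, struct cpu_sk *cpus, int ncpu)
{
    struct cpu_sk *best;
    int i;

    /* cheap cache-affinity guess first */
    if (p->last_cpu != NULL && p->last_cpu->idle)
        return (p->last_cpu);

    /* otherwise the shortest per-cpu run queue wins */
    best = &cpus[0];
    for (i = 1; i < ncpu; i++)
        if (cpus[i].nrunnable < best->nrunnable)
            best = &cpus[i];
    return (best);
}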
|
|
1. When checking if the pagedaemon should be awakened and to see how
much work it should do, consider the buffer cache deficit
(how many pages the buffer cache can eat max vs. how many it has
now) as pages that are not free. They are actually still usable by
the allocator, but the pressure on the pagedaemon is increased when
we start to chew into the memory that the buffer cache wants to
use (see the sketch after this message).
2. Remove the stupid 512kB limit of how much memory should be our
free target. That maybe made sense on 68k, but on modern systems
512k is just a joke. Keep it at 3% of physical memory just like
it was meant to be.
3. When doing allocations for the pagedaemon, always let it use the
reserve. the whole UVM_OBJ_IS_KERN_OBJECT is silly and doesn't
work in most cases anyway. We still don't have a reserve for
the pagedaemon in the km_page allocator, but this seems to help
enough. (yes, there are still bad cases in that code and the comment
is only half-true, the whole section needs a massage, but that will
happen later, this diff only touches pagedaemon parts)
Testing by many, prodded by theo.
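
A back-of-the-envelope sketch of the check in points 1 and 2, with invented
variable names: the buffer cache's remaining appetite counts as memory
already spoken for, and the free target is simply 3% of physical memory
with no 512kB cap.

#include <stdbool.h>

unsigned long physmem_pages;    /* total managed pages */
unsigned long free_pages;       /* pages on the free lists right now */
unsigned long bufcache_max;     /* most pages the buffer cache may use */
unsigned long bufcache_cur;     /* pages it is actually using */

static bool
should_wake_pagedaemon(void)
{
    unsigned long deficit = bufcache_max - bufcache_cur;
    unsigned long freetarg = physmem_pages * 3 / 100;

    /* pages the buffer cache still wants are treated as not free */
    return (free_pages < freetarg + deficit);
}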
|
|
|
|
|
|
and supposed to be only used from within ddb.
|
|
macros that just expand into the mutex functions
to keep the abstraction, do assorted cleanup.
ok miod@,art@
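
The sort of wrapper being described, with invented names: callers keep the
old lock/unlock spelling while the expansion is now the mutex functions.

#include <sys/mutex.h>

extern struct mutex example_pageq_mtx;

#define example_lock_pageq()    mtx_enter(&example_pageq_mtx)
#define example_unlock_pageq()  mtx_leave(&example_pageq_mtx)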
|
|
|
|
ckuethe@ for a while. Okay beck@, "it is good timing" deraadt@.
|