pools, sized by powers of 2, which are constrained to dma memory.
ok matthew tedu thib
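A minimal sketch of the size-class idea this describes, assuming nothing beyond the message itself: one pool per power-of-2 size, each backed by DMA-reachable memory, and a request is rounded up to the smallest class that fits. The names and bounds below are illustrative, not the committed interface.

#include <stddef.h>

/* Hypothetical bounds: size classes from 64 bytes up to 32k. */
#define DMA_MINSHIFT    6
#define DMA_NCLASSES    10

/*
 * Pick the pool to allocate from.  In the real code there would be one
 * struct pool per class, each constrained to DMA-able pages; requests
 * larger than the biggest class would have to take another path.
 */
static int
dma_class(size_t size)
{
        int shift = DMA_MINSHIFT;

        while (((size_t)1 << shift) < size &&
            shift < DMA_MINSHIFT + DMA_NCLASSES - 1)
                shift++;
        return shift - DMA_MINSHIFT;    /* index into the pool array */
}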
Now, instead of the global object hashtable, we have a per-object tree.
Testing shows no performance difference and a slight code shrink. OTOH, when
locking is more fine-grained, this should be faster since it removes the
contention on uvm.hashlock.
ok thib@, art@.
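A minimal sketch of the per-object tree, using the RB macros from <sys/tree.h>; the structure names, fields, and compare function are illustrative stand-ins, not the committed diff.

#include <sys/types.h>
#include <sys/tree.h>

struct vpage {
        RB_ENTRY(vpage) objt;           /* per-object tree linkage */
        off_t           offset;         /* page's offset within its object */
};

struct vobject {
        RB_HEAD(vpage_tree, vpage) memt;        /* this object's pages */
};

static int
vpage_cmp(struct vpage *a, struct vpage *b)
{
        if (a->offset < b->offset)
                return -1;
        return a->offset > b->offset;
}

RB_GENERATE_STATIC(vpage_tree, vpage, objt, vpage_cmp);

/*
 * Lookup walks only this object's tree: no global hash table, and so
 * no global uvm.hashlock to contend on once locking is per object.
 */
static struct vpage *
vpage_lookup(struct vobject *obj, off_t off)
{
        struct vpage key;

        key.offset = off;
        return RB_FIND(vpage_tree, &obj->memt, &key);
}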
We still have no idea why this stops the crashes, but it does.
A machine forced to 64MB of RAM cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
separately).
A change at or just before the hackathon has either exposed or added a
very, very nasty memory corruption bug that is giving us hell right now.
So, in the interest of kernel stability, these diffs are being backed out
until that corruption bug has been found and squashed; then the ones that
are proven good may slowly return.
A quick hit list of the main commits this backs out:
mine:
uvm_objwire
the lock change in uvm_swap.c
using trees for uvm objects instead of the hash
removing the pgo_releasepg callback.
art@'s:
putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate() since
all callers called that just prior anyway.
ok beck@, ariane@.
prompted by deraadt@.
global lock, switch the uvm object pages to being kept in a per-object
RB_TREE. Right now this is approximately the same speed, but cleaner.
When biglock usage is reduced this will improve concurrency by reducing
lock contention.
ok beck@ art@. Thanks to jasper for the speed testing.
two cases of pool_get() + memset(0) -> pool_get(,,,PR_ZERO)
1.5 cases of global variables that are already zeroed, so don't zero them.
ok ariane@, comments on stuff I'd missed from blambert@ and cnst@.
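The shape of that change, on a made-up pool and softc (PR_WAITOK and PR_ZERO are real pool_get(9) flags; the rest is illustrative):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pool.h>

struct foo_softc { int f_state; };      /* hypothetical */
struct pool foo_pool;                   /* hypothetical */

/* before: allocate, then zero by hand */
struct foo_softc *
foo_alloc_old(void)
{
        struct foo_softc *sc;

        sc = pool_get(&foo_pool, PR_WAITOK);
        memset(sc, 0, sizeof(*sc));
        return sc;
}

/* after: let the pool hand back already-zeroed memory */
struct foo_softc *
foo_alloc_new(void)
{
        return pool_get(&foo_pool, PR_WAITOK | PR_ZERO);
}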
K&R function declarations, so switch them all over to ANSI style, in
accordance with the prophecy.
"go for it" art@
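What the ansification above looks like on a made-up function (old_style and new_style are not real kernel functions); the behaviour is identical, but only the second form gives the compiler a prototype to check callers against.

/* K&R-style definition: parameter types declared separately */
static int
old_style(obj, len)
        void *obj;
        int len;
{
        return obj != 0 && len > 0;
}

/* ANSI-style definition */
static int
new_style(void *obj, int len)
{
        return obj != 0 && len > 0;
}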
pages.
"looks good/no problems with it" tedu@ miod@ art@
changes the pressure on the uvm system, uncovering several bugs. Some
of those bugs result in provable deadlocks. We'll have to reconsider
integrating this diff again after fixing those bugs.
ok art@
This will allow us to escape the limitations of kmem_map.
At this moment, the per-type limits are still enforced for all sizes,
but we might loosen that limit in the future after some thinking.
Original diff from Mickey in kernel/5761; I massaged it a little to
obey the per-type limits.
miod@ ok
ckuethe@ for a while. Okay beck@, "it is good timing" deraadt@.
to be page aligned and can contain more "noise".
From mickey, art@ ok
and make sure that nothing can ever be mapped at these addresses.
Only i386 overrides the default for now.
From mickey@, ok art@ miod@
us did not see it or get a chance to test it before it was committed. It
broke cvs, in the ami driver, making it unable to see its devices.
this results in less KVA waste due to static preallocation of those
for every phys page and also every swap page.
tested by beck krw miod
an interrupt-safe thread.
use this as the new backend for mbpool and mclpool, eliminating the mb_map.
introduce a sysctl kern.maxclusters which controls the limit of clusters
allocated.
testing by many people, works everywhere but m68k. ok deraadt@
this essentially deprecates the NMBCLUSTERS option; don't use it.
this should reduce pressure on the kmem_map and the uvm reserve of static
map entries.
The only thing left in vm/ are just dumb wrappers.
vm/vm.h includes uvm/uvm_extern.h
vm/pmap.h includes uvm/uvm_pmap.h
vm/vm_page.h includes uvm/uvm_page.h
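Concretely, a dumb wrapper of that sort is nothing but a forward to the uvm header, e.g. (per the list above):

/* vm/vm_page.h, reduced to a forwarding wrapper */
#include <uvm/uvm_page.h>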
Including support for zeroing pages in the idle loop (not enabled yet).
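A toy sketch of the idle-loop zeroing idea; the page structure, list names, and function below are illustrative, not the uvm interface. The point is that spare idle cycles pre-zero free pages, so later allocations that need zero-filled memory can skip the memset.

#include <sys/queue.h>
#include <string.h>

#ifndef PAGE_SIZE
#define PAGE_SIZE 4096
#endif

struct page {
        TAILQ_ENTRY(page) listq;
        void *va;                       /* mapped address of the page */
};

TAILQ_HEAD(pagelist, page);
struct pagelist freelist = TAILQ_HEAD_INITIALIZER(freelist);
struct pagelist zerolist = TAILQ_HEAD_INITIALIZER(zerolist);

/* Called from the idle loop; returns 0 when there is nothing to do. */
int
idle_zero_one_page(void)
{
        struct page *pg = TAILQ_FIRST(&freelist);

        if (pg == NULL)
                return 0;
        TAILQ_REMOVE(&freelist, pg, listq);
        memset(pg->va, 0, PAGE_SIZE);   /* the actual work */
        TAILQ_INSERT_TAIL(&zerolist, pg, listq);
        return 1;
}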
This is to match (make diffs smaller) the code in NetBSD.
The new gcc inlines those functions, so this could also be a performance win.
Mostly cleanups, but also a few improvements to pagedaemon for better
handling of low memory and/or low swap conditions.