Age | Commit message | Author |
|
internals. this fixes a panic i got where a network interrupt tried to use
the mbuf pool's mutex while pool_reclaim_all already held it, which led
to the same cpu trying to lock that mutex twice.
ok deraadt@
|
|
aren't necessarily atomic.
this is an update of a diff matthew@ posted to tech@ over a year ago.
|
|
write to ph.
ok blambert@ matthew@ deraadt@
|
|
|
|
also, rmpage updates curpage, no need to do it twice.
ok art deraadt guenther
|
|
leaves an empty page in curpage, and this inconsistency slowly spreads
until finally one of the other pool checks freaks out.
ok art deraadt
|
|
and a pool_init flag to aggressively run pool_chk. ok art deraadt
|
|
The problems during the hackathon were not caused by this (most likely).
prodded by deraadt@ and beck@
|
|
and we aren't sure what's causing them.
shouted oks by many before I even built a kernel with the diff.
|
|
- Use km_alloc for all backend allocations in pools.
- Use km_alloc for the emergency kentry allocations in uvm_mapent_alloc
- Garbage collect uvm_km_getpage, uvm_km_getpage_pla and uvm_km_putpage
ariane@ ok
|
|
to on, if POOL_DEBUG is compiled in, so that boot-time pool corruption
can be found. When the sysctl is turned off, performance is almost
as good as compiling with POOL_DEBUG compiled out. Not all pool page
headers can be purged of the magic checks.
performance tests by henning
ok ariane kettenis mikeb
|
|
Allow reclaiming pages from all pools.
Allow zeroing all pages.
Allocate the more equal pig.
mlarkin@ needs this.
Not called yet.
ok mlarkin@, theo@
|
|
ok claudio tedu
|
|
have real values, no 0 values anymore.
ok deraadt kettenis krw matthew oga thib
|
|
Currently only checks that we're not in an interrupt context, but will
soon check that we're not holding any mutexes either.
Update malloc(9) and pool(9) to use assertwaitok(9) as appropriate.
"i like it" art@, oga@, marco@; "i see no harm" deraadt@; too trivial
for me to bother prying actual oks from people.
|
|
This is clearer, and as thib pointed out, the default in softraid was
wrong. ok thib.
|
|
Use uvm_km_kmemalloc_pla with the dma constraint to allocate kernel stacks.
Yes, that means DMA is possible to kernel stacks, but only until we've fixed
all the scary drivers.
deraadt@ ok
|
|
ok tedu@, beck@, oga@
|
|
which contains the constraints for DMA/memory allocation for each
architecture, and dma_constraints which contains the range of addresses
that are dma accessible by the system.
This is based on ariane@'s physcontig diff, with lots of bugfixes and
the following additions by myself:
Introduce a new function pool_set_constraints() which sets the address
range for which we allocate pages for the pool from, this is now used
for the mbuf/mbuf cluster pools to keep them dma accessible.
The !direct archs no longer stuff pages into the kernel object in
uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
Tested heavily by myself on i386, amd64 and sparc64. Some tests on
alpha and SGI.
"commit it" beck, art, oga, deraadt
"i like the diff" deraadt
|
|
|
|
holding a mutex which won't be released. From Christian Ehrhardt.
While here, fix another buglet: no need to pass down PR_ZERO either, as noticed by blambert@.
|
|
an RB tree, not into a hashtable.
|
|
around and add POOL_DEBUG as an enabled option, removing the define from subr_pool.c.
comments & ok deraadt@.
|
|
this.
ok beck@, dlg@
|
|
ok beck@, dlg@
|
|
|
|
|
|
*and* PR_ZERO in flags, you will no longer zero out your nicely
constructed object.
Instead, now if you have a constructor set, and you set PR_ZERO, you will
panic (it's invalid due to how constructors work).
ok miod@ deraadt@ on earlier versions of the diff. ok tedu@ after he
pointed out a couple of places I messed up.
Problem initially noticed by ariane@ a while ago.
|
|
|
|
|
|
slackers now get more bugs to fix, yay!
discussed with deraadt@.
|
|
is invoked with the pool mutex held, the asserts are satisfied by design.
ok tedu@
|
|
in pool_init so the pool struct doesn't have to be zeroed before
you init it.
|
|
|
|
since it is essentially free. To turn on the checking of the rest of the
allocation, use 'option POOL_DEBUG'
ok tedu
|
|
between releases we may want to turn it on, since it has uncovered real
bugs)
ok miod henning etc etc
|
|
in full pages that have been allocated.
spotted by claudio@
|
|
this can be used to walk over all the items allocated with a pool and have
them examined by a function the caller provides.
with help from and ok tedu@
|
|
on, aka, its coloring.
ok tedu@
|
|
works and there's even some sanity checks that it actually returns what we
expect it to return.
|
|
borked and instead of stressing to figure out how to fix it, I'll
let people's kernels work.
|
|
by otto@
ok otto@
|
|
This should make dlg happy.
|
|
This is solved by special allocators and an obfuscated compare function
for the page header splay tree and some other minor adjustments.
At this moment, the allocator will be picked automagically by pool_init
and you can get a kernel_map allocator if you specify PR_WAITOK in flags
(XXX), default is kmem_map. This will be changed in the future once the
allocator code is slightly reworked. But people want to use it now.
"nag nag nag nag" dlg@
|
|
is a lot slower. Before release this should be backed out, but for now
we need everyone to run with this and start finding the use-after-free
style bugs this exposes. original version from tedu
ok everyone in the room
|
|
|
|
|
|
|
|
add a new arg to the backend so it can tell pool to slow down. when we get
this flag, yield *after* putting the page in the pool's free list. whatever
we do, don't let the thread sleep.
this makes things better by still letting the thread run when a huge pf
request comes in, but without artificially increasing pressure on the backend
by eating pages without feeding them forward.
ok deraadt
|
|
Not sure what's more surprising: how long it took for NetBSD to
catch up to the rest of the BSDs (including UCB), or the amount of
code that NetBSD has claimed for itself without attributing to the
actual authors.
OK deraadt@
|