Age | Commit message | Author
|
should help inspecting socket issues in the future.
enthusiasm from mpi@ bluhm@ deraadt@
|
|
multi-page backend allocator implementation no longer needs to grab the
kernel lock.
ok mlarkin@, dlg@
|
|
* pool_allocator_single: single page allocator, always interrupt-safe
* pool_allocator_multi: multi-page allocator, interrupt-safe
* pool_allocator_multi_ni: multi-page allocator, not interrupt-safe
ok deraadt@, dlg@
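For reference, a backend allocator boils down to an alloc/free pair of function
pointers plus the size of the pages it hands out. A minimal userland sketch of
that shape (the struct and field names below are made up for illustration, not
the kernel definitions):

    #include <stdio.h>
    #include <stdlib.h>

    /* illustrative only: a backend allocator is a page-sized alloc/free pair */
    struct backend_allocator {
        void   *(*ba_alloc)(size_t);   /* get one pool page */
        void    (*ba_free)(void *);    /* give it back */
        size_t    ba_pagesz;           /* size of the pages it hands out */
        int       ba_intrsafe;         /* safe to call from interrupt context? */
    };

    static void *
    single_alloc(size_t sz)
    {
        return (malloc(sz));           /* stands in for the single page allocator */
    }

    static void
    single_free(void *v)
    {
        free(v);
    }

    int
    main(void)
    {
        struct backend_allocator single = { single_alloc, single_free, 4096, 1 };
        void *page = single.ba_alloc(single.ba_pagesz);

        printf("got a %zu byte page at %p\n", single.ba_pagesz, page);
        single.ba_free(page);
        return (0);
    }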
|
|
isn't specified) the default backend allocator implementation no longer
needs to grab the kernel lock.
ok visa@, guenther@
|
|
in the (default) single page pool backend allocator. This means it is now
safe to call pool_get(9) and pool_put(9) for "small" items while holding
a mutex without also holding the kernel lock, as these functions will
no longer acquire the kernel lock under any circumstances. For "large" items
(where large means larger than 1/8th of a page) this still isn't safe though.
ok dlg@
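Roughly, the small/large split mentioned above is just a size check against the
page size; a tiny sketch of the idea (illustrative names, not the kernel code):

    #include <stdio.h>

    #define PAGE_SIZE 4096

    /* illustrative: items over 1/8th of a page count as "large" */
    static int
    item_is_large(size_t itemsz)
    {
        return (itemsz > PAGE_SIZE / 8);
    }

    int
    main(void)
    {
        printf("256 bytes:  %s\n", item_is_large(256) ? "large" : "small");
        printf("1024 bytes: %s\n", item_is_large(1024) ? "large" : "small");
        return (0);
    }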
|
|
functions. Note that these calls are deliberately not added to the
special-purpose back-end allocators in the various pmaps. Those allocators
either don't need to grab the kernel lock, are always called with the kernel
lock already held, or are only used on non-MULTIPROCESSOR platforms.
ok tedu@, deraadt@, dlg@
|
|
if we're allowed to try and use large pages, we try and fit at least
8 of the items. this amortises the per page cost of an item a bit.
"be careful" deraadt@
|
|
from martin natano
|
|
now that idle pool pages are timestamped we can tell how long they've
been idle. this adds a task that runs every second that iterates
over all the pools looking for pages that have been idle for 8
seconds so it can free them.
this idea probably came from a conversation with tedu@ months ago.
ok tedu@ kettenis@
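the gc is deliberately simple: once a second, walk the pools and free any
page that has been idle for 8 seconds or more. a userland sketch of the idle
test (names made up, timestamps via time(3) instead of the kernel clock):

    #include <stdio.h>
    #include <time.h>

    #define POOL_WAIT_GC 8  /* seconds a page may sit idle before it's freed */

    /* illustrative only: the kernel version hangs off each pool's page list */
    struct idle_page {
        time_t ip_idle_since;  /* set when the last item came back to the page */
    };

    static int
    page_should_be_freed(const struct idle_page *ip, time_t now)
    {
        return (now - ip->ip_idle_since >= POOL_WAIT_GC);
    }

    int
    main(void)
    {
        time_t now = time(NULL);
        struct idle_page fresh = { now - 2 }, stale = { now - 10 };

        /* the once-a-second task walks every pool and applies this test */
        printf("fresh page: %s\n", page_should_be_freed(&fresh, now) ? "free" : "keep");
        printf("stale page: %s\n", page_should_be_freed(&stale, now) ? "free" : "keep");
        return (0);
    }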
|
|
> if we're able to use large page allocators, try and place at least
> 8 items on a page. this reduces the number of allocator operations
> we have to do per item on large items.
this was backed out because of fallout on landisk which has since
been fixed. putting this in again early in the cycle so we can look
for more fallout. hopefully it will stick.
ok deraadt@
|
|
have any direct symbols used. Tested for indirect use by compiling
amd64/i386/sparc64 kernels.
ok tedu@ deraadt@
|
|
if you're having trouble understanding how this helps, imagine
your cpu's caches are a hash table. by moving the base address of
items around (colouring them), you give it more bits to hash with.
in turn that makes it less likely that you will overflow buckets
in your hash. i mean cache.
it was inadvertently removed in my churn of this subsystem, but as
tedu has said on this issue:
> The history of pool is filled with features getting trimmed because they
> seemed unnecessary or in the way, only to later discover how important they
> were. Having slowly learned that lesson, I think our default should be "if
> bonwick says do it, we do it" until proven otherwise.
until proven otherwise we can keep the functionality, especially
as the code cost is minimal.
ok many including tedu@ guenther@ deraadt@ millert@
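to make the colouring concrete: successive pages start their first item at
slightly different offsets, cycling through the slack left in the page. a rough
sketch of the arithmetic (illustrative numbers and names, not the pool code):

    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define ALIGN     8     /* step size for the colour offset */

    /* cycle 0, ALIGN, 2*ALIGN, ... up to maxcolour and wrap */
    static unsigned int
    next_colour(unsigned int curcolour, unsigned int maxcolour)
    {
        unsigned int c = curcolour + ALIGN;

        return (c > maxcolour ? 0 : c);
    }

    int
    main(void)
    {
        unsigned int itemsz = 360, nitems = 10;
        unsigned int slack = PAGE_SIZE - itemsz * nitems;
        unsigned int maxcolour = (slack / ALIGN) * ALIGN;
        unsigned int colour = 0;
        int i;

        /* successive pages place their first item at different offsets */
        for (i = 0; i < 8; i++) {
            printf("page %d: first item at offset %u\n", i, colour);
            colour = next_colour(colour, maxcolour);
        }
        return (0);
    }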
|
|
the item's address is within the page. it does that by masking the
item address with the page mask and comparing that to the page
address.
however, if we're using large pages with external page headers, we
don't request that the large page be aligned to its size. eg, on an
arch with 4k pages, an 8k large page could be aligned to 4k, so
masking bits to get the page address won't work.
these incorrect checks were distracting while i was debugging large
pages on landisk.
this changes it to do range checks to see if the item is within the
page. it also checks if the item is on the page before checking if
its magic values or poison is right.
ok miod@
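roughly, the difference between the two checks looks like this (a standalone
sketch, not the kernel code): masking only works if the page is aligned to its
own size, the range check works either way.

    #include <stdint.h>
    #include <stdio.h>

    #define LARGE_PAGE 8192  /* a "large" pool page */
    #define ARCH_PAGE  4096  /* what it is actually aligned to */

    /* old style: assumes the page is aligned to its size */
    static int
    on_page_by_mask(uintptr_t item, uintptr_t page)
    {
        return ((item & ~((uintptr_t)LARGE_PAGE - 1)) == page);
    }

    /* new style: explicit range check, no alignment assumption */
    static int
    on_page_by_range(uintptr_t item, uintptr_t page)
    {
        return (item >= page && item < page + LARGE_PAGE);
    }

    int
    main(void)
    {
        /* an 8k page that is only 4k aligned, as described above */
        uintptr_t page = 0x3000;        /* 4k aligned, not 8k aligned */
        uintptr_t item = page + 0x1800; /* clearly inside the page */

        printf("mask check:  %d\n", on_page_by_mask(item, page));  /* wrongly 0 */
        printf("range check: %d\n", on_page_by_range(item, page)); /* correctly 1 */
        return (0);
    }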
|
|
an interrupt handler at an ipl level higher than what you're
splasserting you should be at. if you think code should be protected
by IPL_BIO and it's entered from an interrupt handler established
at IPL_NET, you have a bug.
add some asserts to gets and puts so we can pick those cases up.
|
|
landisk machines. we've been unable to figure out due to a lack of
hardware (on my part) or time.
discussed with and ok miod@
|
|
pool_setlowat()
ok dlg@ tedu@
|
|
8 items on a page. this reduces the number of allocator operations
we have to do per item on large items.
ok tedu@
|
|
a second.
this basically brings back the functionality that was trimmed in r1.53,
except this version uses ticks instead of very slow hardware clock reads.
ok tedu@
|
|
latter is cheaper, but i forgot to change the thing that pulls pages off
those lists to match the change in direction. the page lists went from LIFO
to FIFO.
this changes pool_update_curpage to use TAILQ_LAST so we go back to LIFO.
pointed out by and ok tedu@
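in sys/queue.h terms the fix is just which end of the list you pull from: with
pages appended at the tail, TAILQ_FIRST gives FIFO and TAILQ_LAST gives LIFO.
a toy example (not the pool code itself):

    #include <sys/queue.h>
    #include <stdio.h>

    struct page {
        int               p_id;
        TAILQ_ENTRY(page) p_entry;
    };
    TAILQ_HEAD(pagelist, page);

    int
    main(void)
    {
        struct pagelist pl = TAILQ_HEAD_INITIALIZER(pl);
        struct page a = { 1 }, b = { 2 }, c = { 3 };

        /* pages get appended to the tail as they free up */
        TAILQ_INSERT_TAIL(&pl, &a, p_entry);
        TAILQ_INSERT_TAIL(&pl, &b, p_entry);
        TAILQ_INSERT_TAIL(&pl, &c, p_entry);

        /* TAILQ_FIRST picks the oldest page (FIFO); TAILQ_LAST picks the
         * most recently added page (LIFO), so recently used pages get
         * reused first */
        printf("FIFO pick: %d\n", TAILQ_FIRST(&pl)->p_id);
        printf("LIFO pick: %d\n", TAILQ_LAST(&pl, pagelist)->p_id);
        return (0);
    }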
|
|
either end of the lists cheaply.
ok kettenis@ tedu@
|
|
pool hasn't had pool_setipl called.
ok kettenis@ ages ago
|
|
to include that than rndvar.h. ok deraadt dlg
|
|
us handle the slowdown where we already give up pr_mtx and gets rid of
an ugly goto.
ok tedu@ who i think has more tweaks coming
|
|
has been added to the pool, else it doesn't help because the memory isn't
available. lost in locking rework.
tested blambert sthen
|
|
pool_setipl(9) has been called. This avoids the panic introduced in rev 1.139
(which was subsequently backed out) while still effectively guaranteeing a
consistent snapshot. Pools used from interrupt handlers should use the
appropriate pool IPL.
ok dlg@, deraadt@
|
|
i couldn't measure a significant performance difference with or
without it. this is likely a function of the memory involved being
close to bits that are already being touched, the implementation being
simple macros that mean registers can stay hot, and a lack of
conditionals that would cause a cpu pipeline to crash.
this means we're unconditionally poisoning the first two u_longs
of pool items on all kernels. i think it also makes the code easier
to read.
discussed with deraadt@
|
|
previously they were ints, but this bumps them to long sized words.
in the pool item headers they were followed by the XSIMPLEQ entries,
which are basically pointers which got long word alignment. this
meant there was a 4 byte gap on 64bit architectures between the
magic and list entry that wasn't being poisoned or checked.
this change also uses the header magic (which is sourced from
arc4random) with an xor of the item address to poison the item magic
value. this is inspired by tedu's XSIMPLEQ lists, and means we'll
be exercising memory with more bit patterns.
lastly, this takes more care around the handling of the pool_debug
flag. pool pages read it when they're created and stash a local copy
of it. from then on all items returned to the page will be poisoned
based on the page's local copy of the flag. items allocated off the
page will be checked for valid poisoning only if both the page and
pool_debug flags are set.
this avoids a race where pool_debug was not set when an item is
freed (so it wouldn't get poisoned), then gets set, then an item
gets allocated and fails the poison checks because pool_debug wasn't
set when it was freed.
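a rough illustration of the xor trick (made-up names, and a constant standing
in for the arc4random page magic): each free item ends up carrying
magic ^ its own address, so every item is poisoned with a different bit
pattern, and a recompute-and-compare on allocation catches stray writes.

    #include <stdint.h>
    #include <stdio.h>

    /* stands in for the per-page magic sourced from arc4random at page setup */
    #define PAGE_MAGIC 0x9e3779b97f4a7c15UL

    /* written into a free item's header words when it goes back to the page */
    static unsigned long
    item_magic(unsigned long pagemagic, const void *item)
    {
        return (pagemagic ^ (unsigned long)(uintptr_t)item);
    }

    int
    main(void)
    {
        unsigned long buf[8];   /* pretend this is a free pool item */
        unsigned long m = item_magic(PAGE_MAGIC, buf);

        buf[0] = m;             /* poison the first word on free */

        /* on allocation, recompute and compare; a stray write shows up here */
        if (buf[0] != item_magic(PAGE_MAGIC, buf))
            printf("use after free detected\n");
        else
            printf("magic intact: %#lx\n", buf[0]);
        return (0);
    }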
|
|
ok dlg
|
|
in pool_setlowat.
this was stopping arm things from getting spare items into their
pmap entry pools, so things that really needed them in a delicate
part of boot were failing.
reported by rapha@
co-debugging with miod@
|
|
for subr_poison.c will not get compiled at all on !DIAGNOSTIC kernels.
Found the hard way by deraadt@
|
|
least). after this i am confident that pools are mpsafe, ie, can
be called without the kernel biglock being held.
the page allocation and setup code has been split into four parts:
pool_p_alloc is called without any locks held to ask the pool_allocator
backend to get a page and page header and set up the item list.
pool_p_insert is called with the pool lock held to insert the newly
minted page on the pool's internal free page list and update its
internal accounting.
once the pool has finished with a page it calls the following:
pool_p_remove is called with the pool lock held to take the now
unnecessary page off the free page list and uncount it.
pool_p_free is called without the pool lock and does a bunch of
checks to verify that the items aren't corrupted and have all been
returned to the page before giving it back to the pool_allocator
to be freed.
instead of pool_do_get doing all the work for pool_get, it is now
only responsible for doing a single item allocation. if for any
reason it can't get an item, it just returns NULL. pool_get is now
responsible for checking if the allocation is allowed (according
to high watermarks etc), and for potentially sleeping waiting for
resources if required.
sleeping for resources is now built on top of pool_requests, which
are modelled on how the scsi midlayer schedules access to scsibus
resources.
the pool code now calls pool_allocator backends inside its own
calls to KERNEL_LOCK and KERNEL_UNLOCK, so users of pools don't
have to hold biglock to call pool_get or pool_put.
tested by krw@ (who found a SMALL_KERNEL issue, thank you)
no one objected
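the shape of the locking in the new pool_get, very roughly (a sketch with a
pthread mutex standing in for the pool mutex; the pool_p_* names are from the
description above but their signatures here are simplified and made up):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t pool_mtx = PTHREAD_MUTEX_INITIALIZER;

    struct page { int nitems; };

    static struct page *
    pool_p_alloc(void)              /* no locks held: the backend may sleep here */
    {
        struct page *pg;

        if ((pg = malloc(sizeof(*pg))) != NULL)
            pg->nitems = 8;
        return (pg);
    }

    static void
    pool_p_insert(struct page *pg)  /* pool mutex held: list and accounting only */
    {
        printf("inserted page with %d items\n", pg->nitems);
    }

    static void *
    pool_get_sketch(void)
    {
        void *item = NULL;

        pthread_mutex_lock(&pool_mtx);
        /* try to take an item off an existing page here ... */
        pthread_mutex_unlock(&pool_mtx);

        if (item == NULL) {
            struct page *pg = pool_p_alloc();   /* called without the mutex */

            if (pg != NULL) {
                pthread_mutex_lock(&pool_mtx);
                pool_p_insert(pg);              /* linked in under the mutex */
                /* ... and retry the item allocation off the new page ... */
                pthread_mutex_unlock(&pool_mtx);
            }
        }
        return (item);
    }

    int
    main(void)
    {
        pool_get_sketch();
        return (0);
    }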
|
|
of EINVAL like other sysctl things do.
|
|
some pool users (eg, mbufs and mbuf clusters) protect calls to pools
with their own locks that operate at high spl levels, rather than
pool_setipl() to have pools protect themselves.
this means a pool's mtx_enter doesn't necessarily prevent interrupts
that will use a pool, so we get code paths that try to mtx_enter
twice, which blows up.
reported by vlado at bsdbg dot net and matt bettinger
diagnosed by kettenis@
|
|
poked by kspillner@
ok miod@
|
|
ok mpi@ kspillner@
|
|
no functional change.
|
|
of pr_phoffset.
ok doug@ guenther@
|
|
this moves the size of the pool page (not arch page) out of the
pool allocator into struct pool. this lets us create only two pools
for the automatically determined large page allocations instead of
256 of them.
while here support using slack space in large pages for the
pool_item_header by requiring km_alloc provide pool page aligned
memory.
lastly, instead of doing incorrect math to figure out how many arch
pages to use for large pool pages, just use powers of two.
ok mikeb@
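the sizing then reduces to rounding the space for 8 items up to a power-of-two
number of arch pages and checking whether the leftover slack can hold the page
header. a sketch of that arithmetic (illustrative sizes, not the kernel math):

    #include <stdio.h>

    #define ARCH_PAGE   4096
    #define HEADER_SIZE 64    /* stand-in for the size of the page header */

    /* round the space for 8 items up to a power-of-two number of arch pages */
    static unsigned int
    large_page_size(unsigned int itemsz)
    {
        unsigned int want = itemsz * 8;
        unsigned int pgsz = ARCH_PAGE;

        while (pgsz < want)
            pgsz <<= 1;
        return (pgsz);
    }

    int
    main(void)
    {
        unsigned int itemsz = 3000;
        unsigned int pgsz = large_page_size(itemsz);
        unsigned int slack = pgsz - (pgsz / itemsz) * itemsz;

        printf("%u byte items -> %u byte pool pages\n", itemsz, pgsz);
        printf("slack %u bytes: header fits in the page: %s\n", slack,
            slack >= HEADER_SIZE ? "yes" : "no");
        return (0);
    }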
|
|
cut it out of the code to simplify things.
ok mikeb@
|
|
add an explicit rwlock around the global state (the pool list and serial
number) rather than rely on implicit process exclusion, splhigh and splvm.
the only things touching the global state come from process context so we
can get away with an rwlock instead of a mutex. thankfully.
ok matthew@
|
|
containing an item when it's returned to the pool. this means you
need to do an inexact comparison between an items address and the
page address, cos a pool page can contain many items.
previously this used RB_FIND with a compare function that would do math
on every node comparison to see if one node (the key) was within the other
node (the tree element).
this cuts it over to using RB_NFIND to find the closest tree node
instead of the exact tree node. the node compares turn into simple
< and > operations, which inline very nicely with the RB_NFIND. the
constraint (an item must be within a page) is then checked only
once after the NFIND call.
feedback from matthew@ and tedu@
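stripped down, the idea is: find the closest page at or below the item with a
nearest-match lookup, then do one containment check. shown here with a sorted
array and a binary search standing in for the red-black tree:

    #include <stdint.h>
    #include <stdio.h>

    #define POOL_PAGE 8192

    /* find the greatest page start <= addr, then check containment once */
    static int
    find_page(const uintptr_t *pages, int npages, uintptr_t addr)
    {
        int lo = 0, hi = npages - 1, best = -1;

        while (lo <= hi) {
            int mid = (lo + hi) / 2;

            if (pages[mid] <= addr) {
                best = mid;
                lo = mid + 1;
            } else
                hi = mid - 1;
        }

        /* single containment check after the nearest-match lookup */
        if (best != -1 && addr < pages[best] + POOL_PAGE)
            return (best);
        return (-1);
    }

    int
    main(void)
    {
        uintptr_t pages[] = { 0x10000, 0x40000, 0x80000 };

        printf("%d\n", find_page(pages, 3, 0x41000)); /* 1: on the second page */
        printf("%d\n", find_page(pages, 3, 0x30000)); /* -1: not on any page */
        return (0);
    }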
|
|
|