|
ok reyk@, rzalamena@
|
|
reduced buffer size. If the send buffer size is less than the size
of a single mbuf, the mbuf will never fit. So if the send buffer is
empty, split the large mbuf and move only a part of it.
OK claudio@
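A minimal sketch of the idea, assuming the surrounding sosend()
context; sbspace(), sb_hiwat and m_split(9) are real interfaces, the
exact condition shown is illustrative:

    /* the send buffer is empty but the mbuf still cannot fit */
    if (so->so_snd.sb_cc == 0 &&
        m->m_pkthdr.len > sbspace(&so->so_snd)) {
            struct mbuf *n;

            /* m keeps the first sb_hiwat bytes, n gets the rest */
            n = m_split(m, so->so_snd.sb_hiwat, M_WAIT);
            if (n == NULL)
                    return (ENOBUFS);       /* split failed */
            /* ... enqueue m now, keep n for the next pass ... */
    }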
|
|
This will allow us to keep locking simple as soon as we trade
splsoftnet() for a rwlock.
ok bluhm@
|
|
at IPL_SOFTNET.
This will allow us to keep locking simple as soon as we trade
splsoftnet() for a rwlock.
ok bluhm@
|
|
This will allow us to keep locking simple as soon as we trade
splsoftnet() for a rwlock.
ok bluhm@
|
|
ok jsg@
|
|
This will allow us to keep locking simple as soon as we trade
splsoftnet() for a rwlock.
ok bluhm@, claudio@
|
|
to keep things concise i let the multi page allocators provide
multiple sizes of pages, but this feature was implicit inside
pool_init and only usable if the caller of pool_init did not specify
a page allocator.
callers of pool_init can now supply a page allocator that provides
multiple page sizes. pool_init will try to fit 8 items onto a page
still, but will scale its page size down until it fits into what
the allocator provides.
supported page sizes are specified as a bit field in the pa_pagesz
member of a pool_allocator. setting the low bit in that word indicates
that the pages can be aligned to their size.
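a minimal sketch of such an allocator, with hypothetical backend
functions; pa_alloc, pa_free and pa_pagesz are the real pool_allocator
members:

    /* advertise 4k, 8k and 16k pages; low bit = size-aligned */
    struct pool_allocator multipage_allocator = {
            .pa_alloc  = multipage_alloc,   /* hypothetical */
            .pa_free   = multipage_free,    /* hypothetical */
            .pa_pagesz = 4096 | 8192 | 16384 | 1,
    };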
|
|
in process context. The read/write lock introduced in rev 1.64
would create lock ordering problems with the upcoming SOCKET_LOCK()
mechanism. The current tsleep() in sblock() must be replaced with
rwsleep(&socketlock) later. The sb_flags are protected by
KERNEL_LOCK(). They must not be accessed from interrupt context,
but nowadays softnet() is not an interrupt anyway.
OK mpi@
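A hedged sketch of the blocking path as it stands, not the future
rwlock version; SB_LOCK, SB_WANT and the priorities are real, the
wait message is illustrative:

    while (sb->sb_flags & SB_LOCK) {
            sb->sb_flags |= SB_WANT;
            error = tsleep(&sb->sb_flags,
                (sb->sb_flags & SB_NOINTR) ? PSOCK : PSOCK | PCATCH,
                "sblock", 0);
            if (error)
                    return (error);
    }
    sb->sb_flags |= SB_LOCK;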
|
|
In order to stop abusing lo0 for all rdomains, a new loopback interface
will be created every time an rdomain is created. The unit number will
be the same as the rdomain, i.e. lo1 will be attached to rdomain 1.
If this loopback interface is already in use it won't be possible to
create the corresponding rdomain.
In order to know which lo(4) interface is attached to an rdomain, its
index is stored in the rtable/rdomain map.
This has been long overdue since the introduction of rtable/rdomain.
It also fixes a recent regression due to resetting the rdomain of an
incoming packet reported by semarie@, Andreas Bartelt and Nils Frohberg.
ok claudio@
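A hypothetical lookup over that map, assuming a helper like
rtable_loindex(); if_get(9) and if_put(9) are the real interface
reference functions:

    /* fetch the lo(4) interface serving this rdomain */
    struct ifnet *ifp = if_get(rtable_loindex(rdomain));
    if (ifp != NULL) {
            /* ... use the loopback interface ... */
            if_put(ifp);
    }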
|
|
splnet() was necessary when link state changes were executed from
hardware interrupt handlers; nowadays all the changes are serialized
by the KERNEL_LOCK(), so assert that it is held instead.
ok mikeb@
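A sketch of the shape of the change; KERNEL_ASSERT_LOCKED() is the
real assertion macro:

    void
    if_link_state_change(struct ifnet *ifp)
    {
            KERNEL_ASSERT_LOCKED();         /* was: s = splnet(); */
            /* ... update the link state and run the hooks ... */
    }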
|
|
From patrick keshishian
|
|
closef() on a socket will call soclose() which calls splsoftnet(). So
make sure we release the IPL level first in error paths.
Found by Nils Frohberg while testing another diff.
ok mikeb@, bluhm@
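A sketch of the corrected error path; the surrounding context is
illustrative:

    s = splsoftnet();
    /* ... socket setup that can fail ... */
    if (error) {
            splx(s);                /* drop the IPL first */
            closef(fp, p);          /* soclose() raises it itself */
            return (error);
    }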
|
|
Fix a typo introduced in m_pullup(9) refactoring and found the hard
way by semarie@ while testing another diff.
ok mikeb@, dlg@
|
|
pool_item_header is now pool_page_header. the more useful change
is that pool_list is now pool_cache_item. that's what items going into
the per cpu pool caches are cast to, and they get linked together
to make a list.
the functions operating on what is now pool_cache_items have been
renamed to make it more obvious what they manipulate.
|
|
initial thread
ok jsing@ kettenis@
|
|
it copies the existing pool code, except it works on pool_list
structures instead of pool_item structures.
after this i'd like to poison the words used by the TAILQ_ENTRY in
the pool_list struct that aren't used until a list of items is moved
into the global depot.
|
|
it makes it more readable, and fixes a bug in pool_list_put where it
was returning the next item in the current list rather than the next
list to be freed.
|
|
this is modelled on what's described in the "Magazines and Vmem:
Extending the Slab Allocator to Many CPUs and Arbitrary Resources"
paper by Jeff Bonwick and Jonathan Adams.
the main semantic borrowed from the paper is the use of two lists
of free pool items on each cpu, and only moving one of the lists
in and out of a global depot of free lists to mitigate against a
cpu thrashing against that global depot.
unlike slabs, pools do not maintain or cache constructed items,
which allows us to use the items themselves to build the free list
rather than having to allocate arrays to point at constructed pool
items.
the per cpu caches are built on top of the cpumem api.
this has been kicked a bit by hrvoje popovski and simon mages (thank you).
i'm putting it in now so it is easier to work on and test.
ok jmatthew@
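a sketch of the put side of the two-list scheme, with made-up names
(struct item, struct cpucache, CACHE_MAX, depot_put) standing in for
the real code:

    #define CACHE_MAX 8                     /* made-up list size */

    struct item { struct item *i_next; unsigned int i_nitems; };
    struct cpucache {
            struct item *cc_actv;           /* list being filled */
            struct item *cc_prev;           /* spare list */
    };
    void depot_put(struct item *);          /* hands a whole list over */

    void
    cache_put(struct cpucache *cc, struct item *it)
    {
            if (cc->cc_actv != NULL && cc->cc_actv->i_nitems >= CACHE_MAX) {
                    /* active list is full; rotate it to the spare slot */
                    if (cc->cc_prev != NULL)
                            depot_put(cc->cc_prev); /* whole lists only */
                    cc->cc_prev = cc->cc_actv;
                    cc->cc_actv = NULL;
            }
            /* the free item itself carries the link, no extra allocation */
            it->i_next = cc->cc_actv;
            it->i_nitems = (cc->cc_actv == NULL) ?
                1 : cc->cc_actv->i_nitems + 1;
            cc->cc_actv = it;
    }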
|
|
no need to wait until the first program using it breaks...
"could make sense" semarie@ (and thanks for the cluestick)
OK deraadt@
|
|
ncpus is used on half the architectures to indicate the number of
cpus that have been hatched, and is used on them in things like ddb
to figure out how many cpus to shut down again.
ncpusfound is incremented during autoconf on MP machines to show
how big ncpus will probably become. percpu is initted after autoconf
but before cpus are hatched, so this works well.
|
|
the most important change is that if the requested data is already
in the first mbuf in the chain, m_pullup returns quickly.
if that isn't true, the code will try to use the first mbuf to fit
the requested data.
if that isn't true, it will prepend an mbuf, and maybe a cluster,
to fit the requested data.
m_pullup will now try to maintain the alignment of the original
payload, even when prepending a new mbuf for it.
ok mikeb@
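a typical caller, for illustration; m_pullup(9) frees the chain and
returns NULL on failure:

    struct ip *ip;

    m = m_pullup(m, sizeof(struct ip));
    if (m == NULL)
            return;                         /* chain is gone */
    ip = mtod(m, struct ip *);              /* header is contiguous now */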
|
|
a certain vendor likes to make chips that specify the rx buffer
sizes in kilobyte increments. unfortunately it places the ethernet
header on the start of the rx buffer, which means if you give it a
mcl2k cluster, the ethernet header will not be ETHER_ALIGNed cos
mcl2k clusters are always allocated on 2k boundaries (cos they pack
into pages well). that in turn means the ip header won't be aligned
correctly.
the current workaround on these chips has been to let non-strict
alignment archs just use the normal 2k cluster, but use whatever
cluster can fit 2k + 2 on strict archs. that turns out to be the
4k cluster, meaning we waste nearly 2k of space on every packet.
properly aligning the ethernet header and ip headers gives a
performance boost, even on non-strict archs.
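the alignment math, for illustration; ETHER_ALIGN is the real constant
(2 on OpenBSD):

    /*
     * an mcl2k cluster starts on a 2k boundary, so a 14 byte
     * ethernet header at offset 0 leaves the ip header at offset
     * 14, which is not 4 byte aligned.  offsetting the buffer by
     * ETHER_ALIGN puts the ip header at offset 16.
     */
    m->m_data += ETHER_ALIGN;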
|
|
cpumem_realloc and counters_realloc actually allocated new per cpu data
for new cpus, they didn't resize the existing allocation.
specifically, this renames cpumem_realloc to cpumem_malloc_ncpus, and
counters_realloc to counters_alloc_ncpus.
ok (and with some fixes by) bluhm@
|
|
each cpu's counters still have to be protected by splnet, but this
is better than a single set of counters protected by a global mutex.
ok bluhm@
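a hedged sketch using the counters api; the cpumem handle and the
counter index are hypothetical names:

    struct counters_ref ref;
    uint64_t *counters;
    int s;

    s = splnet();                   /* per cpu, but not interrupt safe */
    counters = counters_enter(&ref, mbuf_counters);
    counters[mbstat_drops]++;
    counters_leave(&ref, mbuf_counters);
    splx(s);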
|
|
Unify these by placing #ifdef MULTIPROCESSOR inside the functions, then
collapse further to reduce _KERNEL blocks
ok dlg
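A sketch of the resulting shape; the MP data layout shown is an
assumption, not the real one:

    void *
    cpumem_enter(struct cpumem *cm)
    {
    #ifdef MULTIPROCESSOR
            /* assumed layout: one slot per cpu */
            return (cm[cpu_number()].mem);
    #else
            /* UP: the handle is the allocation itself */
            return (cm);
    #endif
    }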
|
|
and redirect inet6 sockets to the ::1 flavor of localhost.
|
|
ok jsing@ kettenis@
|
|
ok jsing@ kettenis@
|
|
ok kettenis@ jsing@
|
|
ok deraadt@
|
|
from markus@
|
|
both the cpumem and counters apis simply allocate memory for each cpu in
the system that can be used for arbitrary per cpu data (via cpumem), or
a versioned set of counters per cpu (counters).
there is an alternate backend for uniprocessor systems that basically
turns the percpu data access into an immediate access to a single
allocation.
there is also support for percpu data structures that are available at
boot time by providing an allocation for the boot cpu. after autoconf,
these allocations have to be resized to provide for all cpus that were
enumerated by boot.
ok mpi@
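a short usage sketch of the api described above:

    struct cpumem *cm;
    unsigned int *p;

    cm = cpumem_malloc(sizeof(*p), M_DEVBUF);

    p = cpumem_enter(cm);           /* this cpu's copy */
    (*p)++;
    cpumem_leave(cm, p);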
|
|
Make process_auxv_offset() take and release a reference of the vmspace like
process_domem() does.
ok kettenis@
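A sketch of the borrowed pattern; tr is the traced process and the
access itself is elided:

    struct vmspace *vm;

    vm = tr->ps_vmspace;
    vm->vm_refcnt++;        /* take a reference, as process_domem() does */
    /* ... read the aux vector through the vmspace ... */
    uvmspace_free(vm);      /* drops the reference */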
|
|
powerpc: rename second argument of pmap_proc_iflush() to match other archs
ok kettenis@
|
|
ispidtaken() can rely on pgfind() for all pgrp checks and can simply
use zombiefind() for the zombie check
ok jca@
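A sketch of the two checks named above; the real function also
considers live processes:

    int
    ispidtaken(pid_t pid)
    {
            if (pgfind(pid) != NULL)        /* any pgrp using the id */
                    return (1);
            if (zombiefind(pid) != NULL)    /* an exiting process */
                    return (1);
            return (0);
    }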
|
|
no functional change
|
|
this is cheap since it is basic math. it also means that payloads
which have been aligned carefully will also be aligned in their
copy.
ok yasuoka@ claudio@
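one plausible shape of that math, for illustration only:

    /* give the copy the same sub-word offset as the original */
    n->m_data += mtod(m, unsigned long) & (sizeof(long) - 1);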
|
|
are for option PTRACE only
ok kettenis@
|
|
have an splsoftassert(IPL_SOFTNET) now, so sowakeup() does not need
to call splsoftnet() anymore.
From mpi@'s netlock diff; OK mikeb@
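A sketch of the resulting shape; splsoftassert() is the real macro:

    void
    sowakeup(struct socket *so, struct sockbuf *sb)
    {
            splsoftassert(IPL_SOFTNET);     /* was: s = splsoftnet(); */
            /* ... wake up readers/writers, deliver SIGIO ... */
    }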
|
|
ok deraadt@
|
|
all dns socket connections will be redirected to localhost:port.
this could be a sockopt on the listening socket, but sysctl is
an easier interface to work with right now.
ok deraadt
|
|
splsoftnet() if the function does a splsoftassert(IPL_SOFTNET)
anyway.
|
|
From mpi@'s netlock diff; OK mikeb@
|