catch the libc major bump per request from deraadt@
Diff by reyk.
ok guenther@
race condition and prep for later support of pthread_condattr_setclock()
"get it in" deraadt@, tedu@, cheers by others
and struct timespec * argument. sigtimedwait is just a one-line
wrapper after this.
"get it in" deraadt@, tedu@, cheers by others
kernel so that librthread can detect when a thread is completely
done with its stack without needing a kqueue. The dying thread moves
itself to a GC list, other threads scan the GC list on pthread_create()
and pthread_join() and free the stack and handle once the thread's
thread id is zeroed.
"get it in" deraadt@, tedu@, cheers by others
from Dawe.
particular CPU such that it just sits and spins in the idle loop, effectively
halting that CPU.
ok deraadt@, miod@
device tree walker, and add config_suspend() as well.
ok mlarkin pirofti, discussion with kettenis
"looks right" deraadt@
Okay deraadt@, kettenis@, mlarkin@.
do that and, given the security issues it exacerbates, never will.
So document it and delete the disabled support.
ok deraadt@ tedu@
with a subtle change to make it clearer (and more cache-friendly)
netbsd pr 42312, found by tlambert@apple.com
ok miod
needed so that the route and inp lookups done in TCP and UDP know where
to look. Additionally in_pcbnotifyall() and tcp_respond() got a rdomain
argument as well for similar reasons. With this tcp now seems to be
fully rdomain safe and no longer leaks single packets into the main domain.
Looks good markus@, henning@
supported it doesn't do any harm), so put the KNOTE() in selwakeup() itself and
remove it from any occurrences where both are used, except one for kqueue itself
and one in sys_pipe.c (where the selwakeup is under a PIPE_SEL flag).
Based on a diff from tedu.
ok deraadt
ok jsing@, miod@
miod@ deraadt@ ok.
KNOTE() a second time is not needed (and perhaps bad)
ok claudio millert
after the master side of the pty has finished reading) and in ttyflush().
ok tedu millert
they are signed int)
ok miod guenther
that the timeout doesn't happen if setitimer is called between the
moment the profiling/virtual timer expires and the moment the timeout
is scheduled. firefox triggered this "Profiling timer expired" problem
when, in uthread execve, a signal was being delivered after the timer
had already been disabled; as reported on ports@ recently.
special thanks to kettenis@, kurt@, guenther@.
agreed by kettenis@, tedu@. ok guenther@.
reminded & ok fgs@. tested by ian@.
the two ifs at the start of the function and both variables are only altered
under pt_softc_lock so they cannot change between the checks.
ok guenther@
and the whole extent is used; the current code computations would wrap.
Found the hard way by jsg@, fix discussed with kettenis@, and you get a
regress test for free (which will spin if you compile it against an old
subr_extent.c)
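The class of bug in miniature (hypothetical check, not the subr_extent.c
code): an end computation like start + size - 1 wraps at the top of the
range, while comparing against the space actually left does not:

    #include <limits.h>
    #include <stdio.h>

    /* does [start, start + size - 1] fit inside [start, end]? */
    static int
    fits_naive(unsigned long start, unsigned long end, unsigned long size)
    {
            return (start + size - 1 <= end);   /* wraps near ULONG_MAX */
    }

    static int
    fits_safe(unsigned long start, unsigned long end, unsigned long size)
    {
            return (size != 0 && size - 1 <= end - start);
    }

    int
    main(void)
    {
            /* a request that overruns the very top of the range */
            unsigned long start = ULONG_MAX - 7, end = ULONG_MAX, size = 16;

            printf("naive says %d (bogus fit)\n", fits_naive(start, end, size));
            printf("safe says %d\n", fits_safe(start, end, size));
            return 0;
    }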
This is needed for the addition of further suspend/resume actions.
Okay deraadt@, marco@.
sched_exit(). This means that cpu_exit() and whatever it does (for instance
calling free()), as well as the deadproc p_hash handling, are now locked as well.
This may have been one of the causes of the reaper panics, especially with
rthread patches... which were terminating a lot of threads very quickly onto
the deadproc p_hash list.
ok kurt kettenis miod
bit faster, but come on, inlining is supposed to be reserved only
for things which *critically* need it.
ok millert
is specified.
ok miod@
with m_tag_copy_chain() failures.
Use m_defrag() to eliminate hand rolled defragging of mbufs and
some uses of M_DUP_PKTHDR().
Original diff from thib@, claudio@'s feedback integrated by me.
Tests kevlo@ claudio@, "reads ok" blambert@
ok thib@ claudio@, "m_defrag() bits ok" kettenis@
ok millert@ blambert@ otto@
an RB tree, not into a hashtable.
it. thib@ ok'd the idea and an earlier diff.
task and shove it on a list. allocations can fail, so if something
that wants to run a task later already has memory to handle the
workq task then let it provide it via a new function called
workq_queue_task.
ok kettenis@
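The idea in userland miniature; this is an analog of caller-provided
task memory, not the kernel's workq interface, and every name below is
made up:

    #include <stdio.h>
    #include <sys/queue.h>

    typedef void (*task_fn)(void *, void *);

    struct task {
            task_fn              t_func;
            void                *t_arg1, *t_arg2;
            TAILQ_ENTRY(task)    t_entry;
    };

    TAILQ_HEAD(taskq, task);

    /* no allocation here, so queueing a task can never fail */
    static void
    task_queue(struct taskq *tq, struct task *t, task_fn fn,
        void *a1, void *a2)
    {
            t->t_func = fn;
            t->t_arg1 = a1;
            t->t_arg2 = a2;
            TAILQ_INSERT_TAIL(tq, t, t_entry);
    }

    static void
    task_run(struct taskq *tq)
    {
            struct task *t;

            while ((t = TAILQ_FIRST(tq)) != NULL) {
                    TAILQ_REMOVE(tq, t, t_entry);
                    t->t_func(t->t_arg1, t->t_arg2);
            }
    }

    static void
    hello(void *a1, void *a2)
    {
            (void)a2;
            printf("ran: %s\n", (const char *)a1);
    }

    int
    main(void)
    {
            struct taskq tq = TAILQ_HEAD_INITIALIZER(tq);
            struct task t;          /* memory provided by the caller */

            task_queue(&tq, &t, hello, "pre-allocated task", NULL);
            task_run(&tq);
            return 0;
    }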
around and add POOL_DEBUG as an enabled option, removing the define from subr_pool.c.
comments & ok deraadt@.
uvm_map_checkprot() call, if the memory we're about to return has just been
allocated with uvm_km_kmemalloc() instead of coming from the freelist.
No functional change but a very small speedup when the freelist for the given
bucket is empty.
input, in order to pick the appropriate malloc() bucket.
Replace it with an inline function in kern_malloc.c, which will either
do a tightest-but-slower loop (if option SMALL_KERNEL), or a geometric search
equivalent to what the macro does, but producing smaller code (especially on
platforms which cannot load large constants in one instruction).
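Both strategies in miniature; this userland model assumes illustrative
MINBUCKET/MAXBUCKET values, not the kern_malloc.c ones:

    #include <stdio.h>

    #define MINBUCKET   4           /* smallest bucket: 1 << 4 bytes */
    #define MAXBUCKET   16

    /* SMALL_KERNEL flavour: tightest code, linear time */
    static int
    bucketidx_loop(unsigned long size)
    {
            int n = MINBUCKET;

            while (n < MAXBUCKET && (1UL << n) < size)
                    n++;
            return n;
    }

    /* geometric search: O(log n) comparisons, no big constant tables */
    static int
    bucketidx_geom(unsigned long size)
    {
            int lo = MINBUCKET, hi = MAXBUCKET;

            while (lo < hi) {
                    int mid = (lo + hi) / 2;

                    if ((1UL << mid) < size)
                            lo = mid + 1;
                    else
                            hi = mid;
            }
            return lo;
    }

    int
    main(void)
    {
            unsigned long sz;

            for (sz = 1; sz <= (1UL << MAXBUCKET); sz += 7)
                    if (bucketidx_loop(sz) != bucketidx_geom(sz))
                            printf("mismatch at %lu\n", sz);
            printf("both variants agree\n");
            return 0;
    }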
It currently doesn't compile and this is unlikely to change
as there are many alternatives now since we no longer live
in the early 1990s and Metricom went bankrupt some time ago.
ok many@
size of cache hashtable that has now been removed.
ok beck@ thib@
Bad blambert@, no biscuit.
ok art@
errnos. Note that the error strings are being ignored, since we long ago
decided to not spam the console, and there is no other nice way to use the
errors (without changing the ioctls to pass it back).
The errno is now useful, since we can pass b_error from failing IO up, and
the driver can decide how to use that
ok miod
prodded by and ok thib@
agreed by art@ and blambert@
this.
ok beck@, dlg@
struct to 0/NULL. no performance impact but way less error prone on
addition of a new pkthdr field (as we just ran into with a theo diff). ok theo
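The failure mode being avoided, shown with a toy struct rather than the
real pkthdr:

    #include <stdio.h>
    #include <string.h>

    struct hdr {                    /* toy stand-in for struct pkthdr */
            int      len;
            void    *rcvif;
            int      flags;         /* imagine this field was added later */
    };

    int
    main(void)
    {
            struct hdr h;

            /* field-by-field init silently misses fields added later */
            h.len = 0;
            h.rcvif = NULL;

            /* zeroing the whole struct covers new fields automatically */
            memset(&h, 0, sizeof(h));

            printf("flags = %d\n", h.flags);
            return 0;
    }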
running out of mbufs for rx rings.
if the system low watermark is lower than an rx ring's low watermark,
we'll never send a packet up the stack, we'll always recycle it.
found by thib@ on a bge
sadface
This eliminates the large single namecache hash table, and implements
the name cache as a global lru of entries, and a red-black tree in each
vnode. It makes cache_purge actually purge the namecache entries associated
with a vnode when a vnode is recycled (very important for later on actually being
able to resize the vnode pool)
This commit does #if 0 out a bunch of procmap code that was
already broken before this change, but needs to be redone completely.
Tested by many, including in thib's nfs test setup.
ok oga@,art@,thib@,miod@
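The shape of the new layout, as a toy userland model (names are
illustrative, not the kernel's; uses the <sys/tree.h> and <sys/queue.h>
macros):

    #include <sys/tree.h>
    #include <sys/queue.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct ncentry {
            char                     ne_name[32];
            RB_ENTRY(ncentry)        ne_tree;    /* per-vnode tree */
            TAILQ_ENTRY(ncentry)     ne_lru;     /* global lru */
    };

    static int
    ncentry_cmp(struct ncentry *a, struct ncentry *b)
    {
            return strcmp(a->ne_name, b->ne_name);
    }

    RB_HEAD(nctree, ncentry);
    RB_PROTOTYPE(nctree, ncentry, ne_tree, ncentry_cmp)
    RB_GENERATE(nctree, ncentry, ne_tree, ncentry_cmp)

    TAILQ_HEAD(nclru, ncentry);
    static struct nclru lru = TAILQ_HEAD_INITIALIZER(lru);

    struct vnode {                  /* toy stand-in */
            struct nctree v_nc;
    };

    static void
    cache_enter(struct vnode *dvp, const char *name)
    {
            struct ncentry *ne = calloc(1, sizeof(*ne));

            strncpy(ne->ne_name, name, sizeof(ne->ne_name) - 1);
            RB_INSERT(nctree, &dvp->v_nc, ne);
            TAILQ_INSERT_TAIL(&lru, ne, ne_lru);
    }

    /* on vnode recycle: drop every entry hanging off this vnode */
    static void
    cache_purge(struct vnode *dvp)
    {
            struct ncentry *ne;

            while ((ne = RB_MIN(nctree, &dvp->v_nc)) != NULL) {
                    RB_REMOVE(nctree, &dvp->v_nc, ne);
                    TAILQ_REMOVE(&lru, ne, ne_lru);
                    free(ne);
            }
    }

    int
    main(void)
    {
            struct vnode dir;

            RB_INIT(&dir.v_nc);
            cache_enter(&dir, "passwd");
            cache_enter(&dir, "group");
            cache_purge(&dir);
            printf("purged: %s\n", RB_EMPTY(&dir.v_nc) ? "yes" : "no");
            return 0;
    }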
to free some for use on the rx rings on network cards.
this modifies m_cluncount to advise callers when we're in such a
situation, and makes them responsible for freeing up the cluster
for allocation by MCLGETI later.
fixes an awesome lockup with sis(4) henning has been experiencing.
this is not the best fix, but it is better than the current situation.
yep deraadt@ tested by henning@
inline the loop in the one place it exists, and remove it from uvm
adjust a comment mentioning it accordingly
originally inspired by a diff fixing a comment from oga@
ok art@ beck@ miod@ oga@
ok dlg thib
to the per-ipf mbuf cluster reference counters
ok dlg claudio
just use strings and make things unique.
ok claudio@