Age | Commit message | Author |
|
Validate the input with timespecfix before truncating to a timeval.
timespecfix does not round, so we need to do it by hand after validation.
FreeBSD and NetBSD check the input against this range; we ought to as well.
Also add a regression test for this case.
ok tb@
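A minimal sketch of the validate-then-round flow described above; the variable
names are illustrative and the round-up direction is an assumption (so the
timeout never comes out shorter than requested):

    if ((error = timespecfix(&ts)) != 0)    /* range-check first, as above */
            return (error);
    TIMESPEC_TO_TIMEVAL(&tv, &ts);          /* truncates tv_nsec down to tv_usec */
    if (ts.tv_nsec % 1000) {                /* timespecfix(9) does not round, */
            if (++tv.tv_usec >= 1000000) {  /* so do it by hand and carry */
                    tv.tv_usec = 0;
                    tv.tv_sec++;
            }
    }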
|
|
Instead of converting timespec -> timeval and truncating the input,
check with timespecfix and use tstohz(9) for the tsleep.
All other contemporary systems check this correctly.
Also add a regression test for this case.
ok tb@
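A hedged sketch of the resulting path, assuming timespecfix() returns EINVAL
for out-of-range input and tstohz(9) converts a timespec to clock ticks; the
sleep ident and wmesg are placeholders:

    if ((error = timespecfix(&ts)) != 0)    /* reject bogus input up front */
            return (error);
    /* no timeval detour: convert straight to ticks for the sleep */
    error = tsleep(ident, PWAIT | PCATCH, "wmesg", tstohz(&ts));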
|
|
ok bluhm@, visa@
|
|
|
|
OK mpi@
|
|
lock order checking is disabled but it can be enabled at runtime.
Suggested by deraadt@ / mpi@
OK mpi@
|
|
system time.
Introduce a new CP_SPIN "scheduler state" and modify userland tools
to display the % of time a CPU spends spinning.
Based on a diff from jmatthew@, ok pirofti@, bluhm@, visa@, deraadt@
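A hedged userland sketch of how a tool might derive that percentage from the
cp_time counters via sysctl(3); it assumes CP_SPIN and CPUSTATES come from
<sys/sched.h> as this change arranges:

    #include <sys/types.h>
    #include <sys/sched.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int mib[2] = { CTL_KERN, KERN_CPTIME };
            long cp_time[CPUSTATES], total = 0;
            size_t len = sizeof(cp_time);
            int i;

            if (sysctl(mib, 2, cp_time, &len, NULL, 0) == -1)
                    return (1);
            for (i = 0; i < CPUSTATES; i++)
                    total += cp_time[i];
            if (total > 0)
                    printf("spin: %.1f%%\n",
                        100.0 * cp_time[CP_SPIN] / total);
            return (0);
    }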
|
|
to simplify the code.
OK mpi@
|
|
soreaper() that is scheduled onto the timer thread. soput() is
scheduled from there onto the sosplice task thread. After that it
is safe to pool_put() the socket and splicing data structures.
OK mpi@ visa@
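A hedged sketch of that two-stage hand-off; only the timeout(9), task(9) and
pool(9) interfaces are taken from the message, the member, taskq and pool
names are illustrative:

    void
    soput(void *arg)
    {
            struct socket *so = arg;

            /* by now neither the timer nor a splice task can touch so */
            pool_put(&socket_pool, so);
    }

    void
    soreaper(void *arg)
    {
            struct socket *so = arg;

            /* second stage: queue behind any still-pending splice work */
            task_set(&so->so_put_task, soput, so);
            task_add(sosplice_taskq, &so->so_put_task);
    }

    void
    sofree_deferred(struct socket *so)      /* illustrative caller */
    {
            /* first stage: defer the teardown to the timer thread */
            timeout_set(&so->so_reap_tmo, soreaper, so);
            timeout_add(&so->so_reap_tmo, 0);
    }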
|
|
This gives us refcounting for free, which is what we need for MP.
ok bluhm@, visa@
|
|
The loop variable mp is protected by vfs_busy() so that it cannot
be unmounted. But the next mount point nmp could be unmounted while
VFS_SYNC() sleeps. As the loop in vfs_stall() does not destroy the
mount point, TAILQ_FOREACH_REVERSE without _SAFE is the correct
macro to use.
OK deraadt@ visa@
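A self-contained userland illustration of the macro distinction (not the vfs
code itself): the plain reverse iterator re-reads the next element from the
current one only after the body, so it stays correct as long as the current
element cannot go away, whereas the _SAFE variant's pre-saved pointer could be
freed while the body sleeps:

    #include <sys/queue.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
            int id;
            TAILQ_ENTRY(node) link;
    };
    TAILQ_HEAD(nodelist, node);

    int
    main(void)
    {
            struct nodelist list = TAILQ_HEAD_INITIALIZER(list);
            struct node *n, *cur;
            int i;

            for (i = 0; i < 4; i++) {
                    n = malloc(sizeof(*n));
                    n->id = i;
                    TAILQ_INSERT_TAIL(&list, n, link);
            }

            TAILQ_FOREACH_REVERSE(cur, &list, nodelist, link) {
                    /* while the body "sleeps", the element we would visit
                     * next goes away, like an unmounted nmp */
                    if ((n = TAILQ_PREV(cur, nodelist, link)) != NULL &&
                        n->id == 1) {
                            TAILQ_REMOVE(&list, n, link);
                            free(n);
                    }
                    /* cur itself stays on the list, so re-reading its
                     * link pointer after the body is safe */
                    printf("visiting %d\n", cur->id);
            }
            return (0);
    }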
|
|
later.
ok bluhm@, visa@
|
|
change and this has nothing to do with it.
ok visa@, bluhm@
|
|
the other fields.
Once we no longer have any [k] (kernel lock) protections, we'll be
able to unlock almost all network related syscalls.
Inputs from and ok bluhm@, visa@
|
|
|
|
and indicate if a saved stack trace is empty.
OK guenther@
|
|
...and release it in sounlock(). This will allow us to progressively
remove the KERNEL_LOCK() in syscalls.
ok visa@ some time ago
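A heavily hedged sketch of the intended pattern only; the exact solock() and
sounlock() prototypes at this point are an assumption, as is the sosetopt()
call used for illustration:

    solock(so);                             /* grab the lock protecting the socket */
    error = sosetopt(so, level, optname, m);
    sounlock(so);                           /* released by the caller, not the callee */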
|
|
OK mpi@
|
|
unnecessary because curproc always does the locking.
OK mpi@
|
|
if the lock becomes watched later.
|
|
|
|
the given number of elements already is a power of 2.
ok visa@, "seems like a good plan" deraadt@
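For reference, the usual constant-time test for that condition is the
powerof2() macro from <sys/param.h>, reproduced here; note that it also
accepts 0:

    #define powerof2(x)     ((((x) - 1) & (x)) == 0)

    if (powerof2(nelem))                    /* already a power of 2, */
            return (nelem);                 /* nothing to round up */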
|
|
this gets rid of the source annotation, which doesn't really add
anything other than complexity. The randomness is generally
good enough that the few extra bits the source type would
add are not worth it.
ok mikeb@ deraadt@
|
|
error is set by copyinstr(9) only and we return early if it is non-zero,
so the loop's last condition is always true.
OK deraadt, jca
|
|
curproc that does the locking or unlocking, so the proc parameter
is pointless and can be dropped.
OK mpi@, deraadt@
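Purely as an illustration (the message does not name the function, so this
prototype is hypothetical), the change boils down to:

    /* before: every caller passed curproc anyway */
    int      example_lock(struct example *, int flags, struct proc *p);

    /* after: the routine uses curproc internally */
    int      example_lock(struct example *, int flags);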
|
|
ok visa@
|
|
We do not need the lock there.
Missed this in my former commit pushing NET_LOCK() down the stack.
Found the hard way by naddy@, sorry!
OK mpi@.
|
|
kernels.
While here sync all MP_LOCKDEBUG/while loops.
ok mlarkin@, visa@
|
|
With and ok visa@
|
|
ok deraadt@
|
|
This turns `filehead' into a local variable, which will make it easier
to protect.
ok visa@
|
|
Prodded by and ok mpi@
|
|
After discussing with mpi@ and guenther@, we decided to first fix the existing
semaphore implementation with regard to SA_RESTART and POSIX-compliant
returns in the case where we deal with restartable signals.
Currently we return EINTR everywhere, which is mostly incorrect as the
user cannot know whether she needs to retry the syscall or not. Return
ECANCELED to signal that SA_RESTART was set and EINTR otherwise.
Regression tests pass and so does the posixsuite. The timespec validation
bits are needed to pass the latter.
OK mpi@, guenther@
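A hedged sketch of the error mapping that gives the caller this information;
the sleep call, channel and wmesg are illustrative:

    error = tsleep(&sem, PWAIT | PCATCH, "semwait", timo);
    if (error == ERESTART)                  /* handler had SA_RESTART set */
            error = ECANCELED;              /* tell libc it may restart the wait */
    /* a plain EINTR is passed through so the caller knows not to retry blindly */
    return (error);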
|
|
ok visa@
|
|
ok millert@, visa@
|
|
syscall) confirm the stack register points at MAP_STACK memory, otherwise
SIGSEGV is delivered. sigaltstack() and pthread_attr_setstack() are modified
to create a MAP_STACK sub-region which satisfies alignment requirements.
Observe that MAP_STACK can only be set/cleared by mmap(), which zeroes the
contents of the region -- there is no mprotect() equivalent operation, so
there is no MAP_STACK-adding gadget.
This opportunistic software emulation of a stack-protection bit makes
stack-pivot operations during a ROP chain fragile (kind of like removing a
tool from the toolbox).
original discussion with tedu, uvm work by stefan, testing by mortimer
ok kettenis
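A hedged userland sketch of a region that satisfies the check; per the message
sigaltstack() and pthread_attr_setstack() now arrange the MAP_STACK marking
themselves, so the explicit flag and helper here are only illustrative:

    #include <sys/mman.h>
    #include <err.h>

    void *
    alloc_stack(size_t sz)                  /* illustrative helper */
    {
            /* MAP_STACK can only be set by mmap(2) itself; there is no
             * mprotect(2)-style way to add it to an existing mapping */
            void *stk = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANON | MAP_STACK, -1, 0);

            if (stk == MAP_FAILED)
                    err(1, "mmap");
            return (stk);
    }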
|
|
by my mistake.
Pointed out by Christian Ludwig. Thank you!
|
|
calling FRELE(9) in finishdup().
Update comments accordingly.
ok bluhm@, visa@
|
|
dupfdopen().
ok bluhm@, visa@
|
|
set for pledged processes. dup(2) uses the flag from the old file
descriptor. Make opening /dev/fd consistent with that by duplicating
and inheriting the flag.
OK deraadt@
|
|
Prevents a lock ordering issue between SCHED_LOCK() and printf(9)'s
mutex. While here protect all kprintf() calls ending on the console
with the mutex.
ok kettenis@, visa@
|
|
ok millert@, deraadt@, florian@
|
|
|
|
in namei(9).
So we're sure the 'struct file *' won't disappear behind our back when we
go parallel.
ok visa@, bluhm@
|
|
While here call FREF() right after fd_getfile().
ok bluhm@, visa@
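A hedged sketch of the ordering asked for above; fdp, fd and the FRELE()
arguments reflect the interface as I understand it here and are otherwise
illustrative:

    if ((fp = fd_getfile(fdp, fd)) == NULL)
            return (EBADF);
    FREF(fp);                               /* take our reference right away */

    /* ... work with fp, possibly sleeping ... */

    FRELE(fp, p);                           /* paired release when done */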
|
|
ok visa@, bluhm@
|
|
AF_UNIX is both the historical _and_ standard name, so prefer and recommend
it in the headers, manpages, and kernel.
ok millert@ deraadt@ schwarze@
|
|
When an event that was disabled by EV_DISABLE or EV_DISPATCH is registered
again, an associated filter must be run to mark it active if a preexisting
condition is present.
The issue was reported and fix tested by Lukas Larsson <lukas at erlang.org>,
thanks!
ok mpi
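A hedged userland sketch of the sequence that could previously lose a wakeup;
kq and fd setup are elided:

    #include <sys/event.h>

    struct kevent kev;

    /* EV_DISPATCH: the event disables itself after each delivery */
    EV_SET(&kev, fd, EVFILT_READ, EV_ADD | EV_DISPATCH, 0, 0, NULL);
    kevent(kq, &kev, 1, NULL, 0, NULL);

    /* ... the event fires, is handled, and is now disabled ... */

    /* re-enable it; if data is already pending, the filter must run
     * now so the event becomes active instead of staying silent */
    EV_SET(&kev, fd, EVFILT_READ, EV_ENABLE, 0, 0, NULL);
    kevent(kq, &kev, 1, NULL, 0, NULL);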
|
|
Nothing uses this fd-tracking part of pledge yet.
OK deraadt@
|
|
getvnode().
ok millert@
|