Age | Commit message | Author |
|
ok claudio@
|
|
path changed in rev 1.206. At least acme-client(1) is not happy with
this change.
Reported by claudio. Tests and ok by bluhm.
|
|
The only reason to re-lock the dying `so' is the lock order with the
vnode(9) lock; thus the `unp_gc_lock' rwlock(9) could be taken after solock().
ok bluhm
|
|
ok mglocker@
|
|
|
|
At the sockets layer, only mark buffers as SB_MTXLOCK. At the PCB layer,
protect `so_rcv' with the corresponding `sb_mtx' mutex(9).
The SS_ISCONNECTED and SS_CANTRCVMORE bits are redundant for AF_ROUTE
sockets. Since SS_CANTRCVMORE modifications are performed with both
solock() and `sb_mtx' held, the 'unlocked' SS_CANTRCVMORE check in
rtm_senddesync() is safe.
ok bluhm
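A minimal sketch of the rule this relies on, with names taken from the
message above (schematic, not the actual rtsock.c code):

    /* Writer: SS_CANTRCVMORE only changes with BOTH locks held. */
    solock(so);
    mtx_enter(&so->so_rcv.sb_mtx);
    so->so_rcv.sb_state |= SS_CANTRCVMORE;
    mtx_leave(&so->so_rcv.sb_mtx);
    sounlock(so);

    /*
     * Reader in rtm_senddesync(): holding either one of the two
     * locks gives a stable read, so checking without `sb_mtx'
     * is safe while the caller holds solock().
     */
    if (so->so_rcv.sb_state & SS_CANTRCVMORE)
        return;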
|
|
Speeds up resuming from hibernate.
Testing florian@ stsp@
ok mlarkin@ stsp@
|
|
testing by florian@ mglocker@ mlarkin@
ok deraadt@ mglocker@ mlarkin@
|
|
entry in enum lock_class_index was removed in sys/_lock.h
You get fireworks if the lock_classes array and enum lock_class_index
get out of sync.
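The hazard is generic: an array indexed by an enum misindexes every later
entry once a member is removed from only one of the two. A standalone
illustration (hypothetical names, not the witness(4) source) of catching
that at compile time:

    #include <assert.h>

    enum lock_class_index {
        LO_CLASS_KERNEL_LOCK,
        LO_CLASS_MUTEX,
        LO_CLASS_RWLOCK,
        LO_CLASS_COUNT          /* keep last */
    };

    static const char *lock_classes[] = {
        [LO_CLASS_KERNEL_LOCK]  = "kernel lock",
        [LO_CLASS_MUTEX]        = "mutex",
        [LO_CLASS_RWLOCK]       = "rwlock",
    };

    /* Fireworks at compile time instead of at run time. */
    static_assert(sizeof(lock_classes) / sizeof(lock_classes[0]) ==
        LO_CLASS_COUNT, "lock_classes out of sync with lock_class_index");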
|
|
The SPL level is now tracked by the mutex and we no longer need to track
this in the callers.
OK miod@ mlarkin@ tb@ jca@
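The caller-visible shape of the change (schematic):

    /* Before: callers carried the spl value around. */
    int s;
    SCHED_LOCK(s);
    setrunqueue(NULL, p, p->p_usrpri);
    SCHED_UNLOCK(s);

    /* After: the mutex raises and restores the spl itself. */
    SCHED_LOCK();
    setrunqueue(NULL, p, p->p_usrpri);
    SCHED_UNLOCK();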
|
|
|
|
visibility with kernel printf(9) (thus, onto the console and into dmesg) since
the start of development. I want to reduce the dmesg spam and bring
this more to the attention of the user who ran the command, so let's
try using uprintf(9), which puts it onto the active foreground tty (yes,
there may be cases where there is no tty, but that's ok; I'll admit
I've considered deleting the logging messages entirely)
tested in snaps for a week
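The mechanical part of the swap looks like this (hedged sketch; the
message text is invented):

    /* Before: lands on the console and in dmesg. */
    printf("%s: detached\n", sc->sc_dev.dv_xname);

    /*
     * After: lands on the foreground tty of the process that
     * triggered it; silently dropped when there is no tty.
     */
    uprintf("%s: detached\n", sc->sc_dev.dv_xname);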
|
|
Over the last weeks the last SCHED_LOCK recursion was removed, so this
is now possible and will allow splitting up the SCHED_LOCK in an upcoming
step.
Instead of implementing an MP and an SP version of SCHED_LOCK, this just
always uses the mutex implementation.
While this makes the local s argument unused (the spl is now tracked by
the mutex itself), it is still there to keep this diff minimal.
Tested by many.
OK jca@ mpi@
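How the mutex ends up owning the spl handling, a sketch under the
assumption that SCHED_LOCK maps directly onto mtx_enter():

    /* The scheduler lock as a plain IPL_SCHED mutex. */
    struct mutex sched_lock = MUTEX_INITIALIZER(IPL_SCHED);

    /*
     * mtx_enter() raises the system priority level to IPL_SCHED
     * and mtx_leave() restores the previous level, which is why
     * the caller-side `s' argument no longer carries information.
     */
    #define SCHED_LOCK(s)   mtx_enter(&sched_lock)
    #define SCHED_UNLOCK(s) mtx_leave(&sched_lock)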
|
|
ok deraadt@, mlarkin@
|
|
i386 such that we can call the necessary hooks in the suspend/resume code
without adding #ifdefs. Tweak the arm64 implementation such that we can
call the hooks earlier as this is necessary to mask MSI and MSI-X
interrupts on arm64.
ok deraadt@, mlarkin@
|
|
There is no reason to keep the wait message in place since it will
never show up even in ddb show proc output.
OK jca@
|
|
use one of the gotos. In this case goto out with mask and prop set to 0.
OK jca@
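The shape of the change (schematic; mask and prop are the locals named
above, the condition is hypothetical):

    if (nothing_to_do) {        /* hypothetical condition */
        /*
         * Instead of returning early, take the common exit
         * path with nothing left to deliver or mark.
         */
        mask = 0;
        prop = 0;
        goto out;
    }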
|
|
list. setpriority() is trivial and probably faster than releasing and
relocking SCHED_LOCK().
OK jca@
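Sketch of the trade-off (schematic loop; the real list and the exact
setpriority() arguments may differ):

    SCHED_LOCK();
    TAILQ_FOREACH(p, &proclist, p_list) {
        /*
         * setpriority() is cheap, so calling it while holding
         * SCHED_LOCK() beats a drop-and-retake per element.
         */
        setpriority(p, p->p_estcpu, p->p_p->ps_nice);
    }
    SCHED_UNLOCK();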
|
|
This diff adjusts how single_thread_set() accounts the threads by using
ps_threadcnt as the initial value and counting out all threads that are
already parked. Previously single_thread_check() called exit1() before
decreasing ps_singlecount; this is now done in exit1().
exit1() and thread_fork() ensure that ps_threadcnt is updated with
pr->ps_mtx held, and exit1() also accounts for exiting threads since
exit1() can sleep.
OK mpi@
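Schematic of the accounting described (field and loop shapes are
assumptions, not the literal diff):

    /*
     * Start from the full thread count and count out threads
     * that are already parked and will never check in.
     */
    int count = pr->ps_threadcnt - 1;       /* minus the caller */

    TAILQ_FOREACH(q, &pr->ps_threads, p_thr_link) {
        if (q == p)
            continue;
        if (q->p_stat == SSTOP)
            count--;            /* already parked */
    }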
|
|
of cleaning it in fusefs_mount().
ok claudio
|
|
|
|
the latter supporting the ability to get timestamp resolution of
symlinks.
ok deraadt@ millert@
|
|
Unify behaviour across all sockets. Now sblock() should always be
taken before solock() in all involved paths: sosend(), soreceive(),
sorflush() and sosplice(). sblock() is a fine-grained lock which
serializes the socket send and receive routines on the `so_rcv' or
`so_snd' buffers. There is no big problem with waiting for the netlock
while holding sblock().
This unification removes a lot of temporary "sb_flags & SB_MTXLOCK" code
from the sockets layer. It establishes a straight sblock() -> solock()
lock order: no more solock() -> sblock() -> sounlock() ->
solock() -> sbunlock() -> sounlock() chains in the sosend() and soreceive()
paths. It also brings witness(4) support for sblock(), including the
NFS-involved sockets, which is useful.
Since witness(4) support was introduced to sblock() with this diff,
some new witness reports appeared.
bulk(1) tests by tb, ok bluhm
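The before/after locking shape in the sosend()/soreceive() paths
(schematic; locking calls only):

    /* Before: interleaved, with drop-and-retake chains. */
    solock(so);
    sblock(&so->so_snd, SBL_WAIT);
    sounlock(so);
    solock(so);
    /* ... */
    sbunlock(&so->so_snd);
    sounlock(so);

    /* After: one straight order, sblock() outside, solock() inside. */
    sblock(&so->so_snd, SBL_WAIT);
    solock(so);
    /* ... */
    sounlock(so);
    sbunlock(&so->so_snd);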
|
|
The simplest case. Nothing to change in sockets layer, only set
SB_MTXLOCK on socket buffers.
ok bluhm
|
|
|
|
while here, ensure all vop_remove fields are set, and always call the function.
the change is very conservative: it only adds a vnode ref drop/unlock where it
was absent because it should be unreachable (and if it wasn't, it should fix
things).
ok miod@
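What "adds vnode ref drop/unlock where it was absent" looks like in a
stub, a hedged sketch using the standard vop_remove argument struct
(function name hypothetical):

    int
    dead_remove(void *v)    /* hypothetical unreachable stub */
    {
        struct vop_remove_args *ap = v;

        /*
         * Even an "impossible" entry point must honour the
         * vop_remove contract: release what it was handed.
         */
        vput(ap->a_dvp);
        vput(ap->a_vp);
        return (ENOTDIR);
    }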
|
|
In order to help code maintenance, explicitly add all `struct vops` members with
their current value (NULL if not previously set), still using the C99 notation.
ok miod@
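The C99 notation in question is designated initializers; a miniature
standalone illustration (not the real `struct vops`, which is much
larger, and the dead_* names are made up):

    struct vops {
        int (*vop_lookup)(void *);
        int (*vop_create)(void *);
        int (*vop_remove)(void *);
    };

    int dead_lookup(void *);
    int dead_remove(void *);

    /*
     * Every member spelled out, so a NULL is a visible decision
     * rather than an accident of omission.
     */
    const struct vops dead_vops = {
        .vop_lookup     = dead_lookup,
        .vop_create     = NULL,
        .vop_remove     = dead_remove,
    };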
|
|
|
|
Unlock sigsuspend() and __thrsigdivert() again.
|
|
Since we want to unlock sigsuspend, ptsignal needs to double check in the
SSLEEP case that the signal being delivered is still masked or unmasked.
Remove the early return for action SIG_HOLD so that the SSLEEP case can
properly recheck the sigmask.
On top of this, update siglist only in one place at the end of ptsignal;
this now includes the clearing of signals for the SA_CONT and SA_STOP
cases.
OK mpi@
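Schematic of the SSLEEP recheck (shape only, not the actual ptsignal()
diff):

    case SSLEEP:
        /*
         * With sigsuspend() unlocked, the target may have
         * changed its mask since the first check; recheck
         * before deciding to wake it.
         */
        if ((p->p_sigmask & mask) != 0)
            goto out;       /* still masked; leave it pending */
        unsleep(p);
        break;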
|
|
This fixes random gmake failures during ports builds caused by:
gmake[2]: *** read jobs pipe: Device busy. Stop.
Fix verified by tb@ on his bulk build box
OK mvs@ tb@
|
|
One atomic_clearbits_int() hiding in SSTOP was missed when converting all
the exceptions that cleared the siglist again. Instead of clearing the bits,
the mask needs to be set to 0 so that it is properly ignored.
OK mpi@
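The shape of the fix (schematic):

    case SSTOP:
        /*
         * Was: atomic_clearbits_int(&p->p_siglist, mask);
         * which lost the pending signal. Zeroing the local
         * mask instead lets the single siglist update at the
         * end of ptsignal() ignore it properly.
         */
        mask = 0;
        break;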
|
|
unix(4) sockets.
Push solock() deep down to sosend() and remove it from soreceive() paths
for unix(4) sockets.
The transmission path of unix(4) sockets is already half-unlocked because
the connected peer is not locked by solock() during the sbappend*() call.
Use the `sb_mtx' mutex(9) and the `sb_lock' rwlock(9) to protect both
`so_snd' and `so_rcv'.
Since `so_snd' is protected by the `sb_mtx' mutex(9), the re-locking
is not required in uipc_rcvd().
Dispose of and clean up `so_rcv' directly in sofree(). This socket is
almost dead and unlinked from everywhere, including a spliced peer, so a
concurrent sotask() thread will just exit. This is required to keep the
lock order between `i_lock' and `sb_lock'. It also removes the re-locking
from sofree() for all sockets.
SB_OWNLOCK became redundant with SB_MTXLOCK, so remove it. SB_MTXLOCK
was kept because the checks against SB_MTXLOCK within the sb*() routines
are more consistent.
Feedback and ok bluhm
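Schematic of the half-unlocked transmit path described above (call
shapes simplified; whether sbappend() takes `sb_mtx' itself is elided):

    /*
     * Sender side: the peer `so2' is not solock()ed; its
     * receive buffer is protected by `sb_mtx' alone.
     */
    mtx_enter(&so2->so_rcv.sb_mtx);
    sbappend(so2, &so2->so_rcv, m);
    mtx_leave(&so2->so_rcv.sb_mtx);
    sorwakeup(so2);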
|
|
When a lock order reversal is found, perform a path search in the lock
order graph. This lets witness(4) display lock cycles that are longer
than two locks.
OK mpi@
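Stripped of the witness(4) internals, this is a plain path search in a
directed graph; a standalone sketch (hypothetical representation):

    #include <stdio.h>

    #define NLOCKS 4

    /* order[i][j] != 0: lock i was observed taken before lock j. */
    static int order[NLOCKS][NLOCKS];
    static int path[NLOCKS + 1];

    /*
     * Depth-first search for a path from `from' to `to'. On
     * success, path[0..len] holds the chain of locks.
     */
    static int
    find_path(int from, int to, int visited, int depth)
    {
        path[depth] = from;
        if (from == to && depth > 0)
            return depth;
        for (int i = 0; i < NLOCKS; i++) {
            if (!order[from][i] || (visited & (1 << i)))
                continue;
            int len = find_path(i, to, visited | (1 << i), depth + 1);
            if (len > 0)
                return len;
        }
        return 0;
    }

    int
    main(void)
    {
        /* Known order 0 -> 1 -> 2; a new edge 2 -> 0 is a reversal. */
        order[0][1] = order[1][2] = 1;
        int len = find_path(0, 2, 1 << 0, 0);
        for (int i = 0; i <= len; i++)
            printf("%s%d", i ? " -> " : "", path[i]);
        printf(" -> 0 (cycle)\n");
        return 0;
    }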
|
|
Display lock subtypes in "show witness" output to reduce ambiguity.
OK mpi@
|
|
`so_rcv' has the SB_MTXLOCK flag clear, not SB_OWNLOCK.
ok bluhm
|
|
No reason to lock the peer. It can't be or become a listening socket, and
both sockets can't be in the middle of connecting or disconnecting.
ok bluhm
|
|
sosplice().
ok bluhm
|
|
Raw sockets are the simplest inet sockets, so use them to start landing
`sb_mtx' mutex(9) protection for the `so_snd' buffer. Now solock() is taken
only around pru_send*(); the rest of sosend() is serialized by sblock() and
`sb_mtx'. The unlocked SS_ISCONNECTED check is fine, because
rip{,6}_send() checks it. Also, previously SS_ISCONNECTED could be
lost due to the solock() release around the following m_getuio().
ok bluhm
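The resulting sosend() shape (schematic; locking calls only, the
surrounding variables are assumed):

    error = sblock(&so->so_snd, SBL_WAIT);  /* serializes senders */

    /* Copy in user data without the socket lock held. */
    error = m_getuio(&top, atomic, space, uio);

    solock(so);                             /* only around the PCB call */
    error = pru_send(so, top, addr, control);
    sounlock(so);

    sbunlock(&so->so_snd);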
|
|
ok mlarkin@
|
|
Use "netacc" for accept(2) and "netcon" for connect(2). Call sleep
in sys_ypconnect() "ypcon" to make it unique. sblock() now has
"sblock" to distinguish it from netlock.
OK claudio@ mvs@ kn@
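The strings show up as the wait message in ps(1) and top(1); schematic
of a sleep using one of them:

    /* Blocking in accept(2) until a connection arrives. */
    error = tsleep_nsec(&so->so_timeo, PSOCK | PCATCH, "netacc", INFSLP);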
|
|
|
|
KERNEL_LOCK. There is at least a race in sigsuspend which can be
triggered by dump(8). Should be enough to allow me to look for the
real cause.
|
|
mostly dead.
This is more like belts and suspenders since a proc in exit1() will not
receive signals anymore and so proc_stop() should not be reachable. This
is even the case when sigexit() is called and a coredump() is happening.
OK mpi@
|
|
Exiting procs will not return to userland and can not deliver signals so
it is better to not even try.
OK mpi@
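The check is cheap (schematic; P_WEXIT marks a proc that has entered
exit1()):

    /* Don't bother delivering to a proc that is mostly dead. */
    if (p->p_flag & P_WEXIT)
        return;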
|
|
OK mpi@
|
|
These sockets are not connection oriented and don't call pru_rcvd(),
but they have splicing ability and they set `so_error'.
The splicing ability is the biggest problem. However, we can hold
`sb_mtx' around `ssp_socket' modifications together with solock(), so
`sb_mtx' is enough for the isspliced() check in soreceive(). The
unlocked `so_sp' dereference is fine, because we set it only once for
the whole socket lifetime, and we do this before the `ssp_socket'
assignment.
We also need to take sblock() before splicing sockets, so sosplice()
and soreceive() are both serialized. Since `sb_mtx' is required to
unsplice sockets too, it also serializes somove() with soreceive()
regardless of the somove() caller.
sosplice() was reworked to accept a standalone sblock() for udp(4)
sockets.
soreceive() performs an unlocked `so_error' check and modification.
Previously, we had no ability to predict which concurrent soreceive()
or sosend() thread would fail and clear `so_error'. With this unlocked
access we could have sosend() and soreceive() threads which fail
together.
`so_error' is stored in a local `error2' variable because `so_error'
could be overwritten by a concurrent sosend() thread.
Tested and ok bluhm
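The snapshot pattern for `so_error' (schematic):

    /*
     * Snapshot before acting: a concurrent sosend() may clear
     * or overwrite `so_error' at any time, so later code must
     * use only the local copy.
     */
    error2 = so->so_error;
    if (error2) {
        so->so_error = 0;
        error = error2;
    }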
|
|
|
|
dosigsuspend() no longer needs it.
OK mvs@ mpi@
|
|
no functional change, found by smatch warnings
ok miod@ bluhm@
|