path: root/sys/kern
Age  Commit message  Author
2024-07-03  remove __mp_release_all_but_one(), unused since sched_bsd.c rev 1.92  (Jonathan Gray)
ok claudio@
2024-06-28  Restore original EPIPE and ENOTCONN errors priority in the uipc_send()  (Vitaliy Makkoveev)
path changed in rev 1.206. At least acme-client(1) is not happy with this change. Reported by claudio. Tests and ok by bluhm.
2024-06-26  Push socket re-lock to the vnode(9) release path within unp_detach().  (Vitaliy Makkoveev)
The only reason to re-lock dying `so' is the lock order with vnode(9) lock, thus `unp_gc_lock' rwlock(9) could be taken after solock(). ok bluhm
2024-06-26  return type on a dedicated line when declaring functions  (Jonathan Gray)
ok mglocker@
2024-06-22  remove space between function names and argument list  (Jonathan Gray)
2024-06-14  Switch AF_ROUTE sockets to the new locking scheme.  (Vitaliy Makkoveev)
At the sockets layer only mark buffers as SB_MTXLOCK. At the PCB layer only protect `so_rcv' with the corresponding `sb_mtx' mutex(9). SS_ISCONNECTED and SS_CANTRCVMORE bits are redundant for AF_ROUTE sockets. Since SS_CANTRCVMORE modifications are performed with both solock() and `sb_mtx' held, the 'unlocked' SS_CANTRCVMORE check in rtm_senddesync() is safe. ok bluhm
2024-06-05  No need to call d_open/d_close for every hibernate resume i/o.  (Kenneth R Westerback)
Speeds up resuming from hibernate. Testing florian@ stsp@ ok mlarkin@ stsp@
2024-06-04  Enable hibernate/resume to nvme(4) disks with 4096 byte sectors.  (Kenneth R Westerback)
testing by florian@ mglocker@ mlarkin@ ok deraadt@ mglocker@ mlarkin@
2024-06-03  Remove lock_class_sched_lock from lock_classes since the corresponding  (Claudio Jeker)
entry in enum lock_class_index was removed in sys/_lock.h. You get fireworks if the lock_classes array and enum lock_class_index get out of sync.
2024-06-03  Remove the now unused s argument to SCHED_LOCK and SCHED_UNLOCK.  (Claudio Jeker)
The SPL level is now tracked by the mutex and we no longer need to track this in the callers. OK miod@ mlarkin@ tb@ jca@
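A minimal sketch of the resulting API, assuming the mutex is the global sched_lock described above (exact declarations may differ):

    /* SCHED_LOCK()/SCHED_UNLOCK() as plain mutex(9) wrappers: no spl
     * argument, since the mutex raises and restores the IPL itself. */
    extern struct mutex sched_lock;

    #define SCHED_LOCK()    mtx_enter(&sched_lock)
    #define SCHED_UNLOCK()  mtx_leave(&sched_lock)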
2024-06-03  avoid shadowing a local variable in a lower scope  (Theo de Raadt)
2024-06-02  pledge, MAP_STACK, and pinsyscall failures have been providing failure  (Theo de Raadt)
visibility with kernel printf(9) (thus, onto console and into dmesg) since the start of development. I want to reduce the dmesg spam, and bring this more to the attention of the user who ran the command, so let's try using uprintf(9) which puts it onto the active foreground tty (yes, there may be cases where there is no tty, but that's ok. I'll admit I've considered deleting the logging messages entirely). Tested in snaps for a week.
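uprintf(9) takes printf-style arguments, so the switch is essentially a one-line substitution; a hedged sketch with a made-up message (the real wording and context differ):

    /* before: message lands on the console and in dmesg */
    printf("%s[%d]: pledge violation\n", pr->ps_comm, pr->ps_pid);
    /* after: message lands on the offending process' foreground tty */
    uprintf("%s[%d]: pledge violation\n", pr->ps_comm, pr->ps_pid);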
2024-05-29  Convert SCHED_LOCK from a recursive kernel lock to a mutex.  (Claudio Jeker)
Over the last weeks the last SCHED_LOCK recursion was removed so this is now possible and will allow splitting up the SCHED_LOCK in an upcoming step. Instead of implementing an MP and SP version of SCHED_LOCK this just always uses the mutex implementation. While this makes the local s argument unused (the spl is now tracked by the mutex itself) it is still there to keep this diff minimal. Tested by many. OK jca@ mpi@
2024-05-28  Garbage collect sleep_abort(); it doesn't do anything useful anymore.  (Mark Kettenis)
ok deraadt@, mlarkin@
2024-05-26  Implement wakeup interrupts on amd64. Provide a dummy implementation for  (Mark Kettenis)
i386 such that we can call the necessary hooks in the suspend/resume code without adding #ifdefs. Tweak the arm64 implementation such that we can call the hooks earlier as this is necessary to mask MSI and MSI-X interrupts on arm64. ok deraadt@, mlarkin@
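The i386 side is only a stub so that shared suspend/resume code can call the hooks unconditionally; a sketch assuming the hooks are named intr_enable_wakeup()/intr_disable_wakeup() (names assumed, not confirmed here):

    /* i386: no wakeup-interrupt support, but keep MI callers #ifdef-free */
    void
    intr_enable_wakeup(void)
    {
    }

    void
    intr_disable_wakeup(void)
    {
    }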
2024-05-22  When clearing the wait channel also clear the wait message.  (Claudio Jeker)
There is no reason to keep the wait message in place since it will never show up even in ddb show proc output. OK jca@
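A minimal sketch of the idea, using the usual struct proc field names:

    /* when taking a proc off its wait channel, drop the message with it */
    p->p_wchan = NULL;
    p->p_wmesg = NULL;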
2024-05-22  In the big p_stat switch in ptsignal do not call return but instead  (Claudio Jeker)
use one of the gotos. In this case goto out with mask and prop set to 0. OK jca@
2024-05-22  Just grab the SCHED_LOCK() once in donice() before walking the ps_threads  (Claudio Jeker)
list. setpriority() is trivial and probably faster than releasing and relocking SCHED_LOCK(). OK jca@
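Roughly, the loop becomes the sketch below (simplified; variable names are placeholders):

    /* one SCHED_LOCK() acquisition for the whole thread-list walk */
    SCHED_LOCK();
    TAILQ_FOREACH(p, &pr->ps_threads, p_thr_link)
        setpriority(p, p->p_estcpu, newnice);
    SCHED_UNLOCK();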
2024-05-20  Rework interaction between sleep API and exit1() and start unlocking ps_threads  (Claudio Jeker)
This diff adjusts how single_thread_set() accounts the threads by using ps_threadcnt as the initial value and counting out all threads that are already parked. In single_thread_check() call exit1() before decreasing ps_singlecount; this is now done in exit1(). exit1() and thread_fork() ensure that ps_threadcnt is updated with the pr->ps_mtx held, and in exit1() also account for exiting threads since exit1() can sleep. OK mpi@
2024-05-20  Drop MNT_LOCAL flag in corresponding `vfsconflist' fuse(4) entry instead  (Vitaliy Makkoveev)
of cleaning it in fusefs_mount(). ok claudio
2024-05-18  Regen  (Philip Guenther)
2024-05-18  Add pathconfat(2): pathconf(2) but with at-fd and flags arguments,  (Philip Guenther)
the latter supporting the ability to get timestamp resolution of symlinks. ok deraadt@ millert@
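From userland the call is analogous to pathconf(2); a hedged example assuming a prototype of the form pathconfat(int fd, const char *path, int name, int flag) with AT_SYMLINK_NOFOLLOW as the relevant flag:

    #include <fcntl.h>      /* AT_FDCWD, AT_SYMLINK_NOFOLLOW */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        long res;

        /* timestamp resolution of the symlink itself, not its target */
        res = pathconfat(AT_FDCWD, "link", _PC_TIMESTAMP_RESOLUTION,
            AT_SYMLINK_NOFOLLOW);
        if (res == -1)
            perror("pathconfat");
        else
            printf("timestamp resolution: %ld ns\n", res);
        return 0;
    }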
2024-05-17  Turn sblock() into a `sb_lock' rwlock(9) wrapper for all sockets.  (Vitaliy Makkoveev)
Unify behaviour for all sockets. Now sblock() should always be taken before solock() in all involved paths such as sosend(), soreceive(), sorflush() and sosplice(). sblock() is a fine-grained lock which serializes socket send and receive routines on the `so_rcv' or `so_snd' buffers. There is no big problem with waiting for the netlock while holding sblock(). This unification removes a lot of temporary "sb_flags & SB_MTXLOCK" code from the sockets layer. This unification makes the "solock()" and "sblock()" lock order straight, with no more solock() -> sblock() -> sounlock() -> solock() -> sbunlock() -> sounlock() chains in the sosend() and soreceive() paths. This unification brings witness(4) support for sblock(), including NFS-involved sockets, which is useful. Since the witness(4) support was introduced to sblock() with this diff, some new witness reports appeared. bulk(1) tests by tb, ok bluhm
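The resulting pattern in a sosend()-style path, as a hedged sketch (signatures approximate; the real code also deals with non-blocking flags and error handling):

    /* sketch of the unified lock order: sblock() first, then solock() */
    error = sblock(&so->so_snd, SBL_WAIT);
    if (error)
        return (error);
    solock(so);
    /* ... append data, call pru_send ... */
    sounlock(so);
    sbunlock(&so->so_snd);      /* release in reverse order */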
2024-05-17  Switch AF_KEY sockets to the new locking scheme.  (Vitaliy Makkoveev)
The simplest case. Nothing to change in sockets layer, only set SB_MTXLOCK on socket buffers. ok bluhm
2024-05-14  remove prototypes with no matching function  (Jonathan Gray)
2024-05-13  vfs: VOP_REMOVE: move vnode unlocking and ref dropping to FS-indep part  (Sebastien Marie)
while here, ensure all vop_remove fields are set, and always call the function. the change is very conservative: it only adds vnode ref drop/unlock where it was absent because it should be unreachable (and if it wasn't, it should fix things). ok miod@
2024-05-12  vfs: struct vops: show all members, even if NULL  (Sebastien Marie)
In order to help code maintenance, explicitly add all `struct vops` members with the current value (if not present, it is NULL), still using the C99 notation. ok miod@
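With designated initializers the table makes gaps explicit; a sketch for a hypothetical read-only filesystem (only a few members shown, the examplefs_* names are invented):

    const struct vops examplefs_vops = {
        .vop_lookup     = examplefs_lookup,
        .vop_create     = NULL,         /* not supported, spelled out */
        .vop_open       = examplefs_open,
        .vop_close      = examplefs_close,
        .vop_read       = examplefs_read,
        .vop_write      = NULL,         /* read-only filesystem */
        .vop_remove     = NULL,
        /* ... all remaining members listed the same way, NULL or not ... */
    };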
2024-05-10  Regen  (Claudio Jeker)
2024-05-10  The ptsignal() race against p_sigmask changes by dosigsuspend() is fixed.  (Claudio Jeker)
Unlock sigsuspend() and __thrsigdivert() again.
2024-05-08  Rework how action SIG_HOLD is handled in ptsignal.  (Claudio Jeker)
Since we want to unlock sigsuspend, ptsignal needs to double check in the SSLEEP case that the signal being delivered is still masked or unmasked. Remove the early return for action SIG_HOLD so that the SSLEEP case can properly recheck the sigmask. On top of this, update siglist only in one place at the end of ptsignal; this now includes the clearing of signals for the SA_CONT and SA_STOP cases. OK mpi@
2024-05-07  rw_enter() with RW_NOSLEEP returns EBUSY and not the expected EWOULDBLOCK  (Claudio Jeker)
This fixes random gmake failures during ports builds caused by: gmake[2]: *** read jobs pipe: Device busy. Stop. Fix verified by tb@ on his bulk build box OK mvs@ tb@
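Callers that try-lock therefore have to check for EBUSY; a minimal sketch (sc_lock standing in for any rwlock(9) in the caller's data structure):

    /* non-sleeping attempt to take the write lock */
    if (rw_enter(&sc->sc_lock, RW_WRITE | RW_NOSLEEP) != 0) {
        /* rw_enter() reports EBUSY here, not EWOULDBLOCK */
        return (EBUSY);
    }
    /* ... critical section ... */
    rw_exit(&sc->sc_lock);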
2024-05-07  In Rev 1.296 the update of the siglist was moved to the end of ptsignal().  (Claudio Jeker)
One atomic_clearbits_int() hiding in SSTOP was missed when converting all the exceptions that cleared the siglist again. Instead of clearing the bits the mask needs to be set to 0 so that it is properly ignored. OK mpi@
2024-05-03  Push solock() down to sosend() and remove it from soreceive() paths for  (Vitaliy Makkoveev)
unix(4) sockets. Push solock() deep down to sosend() and remove it from soreceive() paths for unix(4) sockets. The transmission of unix(4) sockets is already half-unlocked because the connected peer is not locked by solock() during the sbappend*() call. Use `sb_mtx' mutex(9) and `sb_lock' rwlock(9) to protect both `so_snd' and `so_rcv'. Since `so_snd' is protected by the `sb_mtx' mutex(9), the re-locking is not required in uipc_rcvd(). Do direct `so_rcv' dispose and cleanup in sofree(). This socket is almost dead and unlinked from everywhere, including the spliced peer, so a concurrent sotask() thread will just exit. This is required to keep the lock order between `i_lock' and `sb_lock'. Also this removes re-locking from sofree() for all sockets. SB_OWNLOCK became redundant with SB_MTXLOCK, so remove it. SB_MTXLOCK was kept because checks against SB_MTXLOCK within sb*() routines are more consistent. Feedback and ok bluhm
2024-05-03  witness: Display lock cycles longer than two locks  (Visa Hankala)
When a lock order reversal is found, perform a path search in the lock order graph. This lets witness(4) display lock cycles that are longer than two locks. OK mpi@
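Conceptually this is a plain depth-first search over the recorded lock-order edges; a simplified sketch that does not use the actual witness(4) data structures:

    /* Is there already a path from "to" back to "from"?  If so, the newly
     * observed order from -> to closes a cycle, and the caller can walk
     * the same path again to print every lock involved. */
    static int
    lock_path_exists(struct lock_node *from, struct lock_node *to, int depth)
    {
        struct lock_edge *e;

        if (to == from)
            return (1);
        if (depth == 0)
            return (0);
        LIST_FOREACH(e, &to->successors, link)
            if (lock_path_exists(from, e->target, depth - 1))
                return (1);
        return (0);
    }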
2024-05-03  witness: Make "show witness" display lock subtypes  (Visa Hankala)
Display lock subtypes in "show witness" output to reduce ambiguity. OK mpi@
2024-05-02  Quick fix for the previous one. socantrcvmore() should raise an assertion if  (Vitaliy Makkoveev)
`so_rcv' has the SB_MTXLOCK flag cleared, not SB_OWNLOCK. ok bluhm
2024-05-02  Don't re-lock sockets in uipc_shutdown().  (Vitaliy Makkoveev)
No reason to lock the peer. It can't be or become a listening socket, and both sockets can't be in the middle of connecting or disconnecting. ok bluhm
2024-05-02  Pass `sosp' instead of `so' to sblock() when locking `so_snd' within  (Vitaliy Makkoveev)
sosplice(). ok bluhm
2024-04-30  Push solock() down to sosend() for SOCK_RAW sockets.  (Vitaliy Makkoveev)
Raw sockets are the simplest inet sockets, so use them to start landing `sb_mtx' mutex(9) protection for the `so_snd' buffer. Now solock() is taken only around pru_send*(); the rest of sosend() is serialized by sblock() and `sb_mtx'. The unlocked SS_ISCONNECTED check is fine, because rip{,6}_send() check it. Also, previously the SS_ISCONNECTED could be lost due to the solock() release around the following m_getuio(). ok bluhm
2024-04-30  Add '\n' to DPRINTF() string that used to be a panic() string.  (Kenneth R Westerback)
ok mlarkin@
2024-04-25  Rename socket wait channels when sleeping.  (Alexander Bluhm)
Use "netacc" for accept(2) and "netcon" for connect(2). Call sleep in sys_ypconnect() "ypcon" to make it unique. sblock() now has "sblock" to distinguish it from netlock. OK claudio@ mvs@ kn@
2024-04-24  Regen  (Claudio Jeker)
2024-04-24  Revert rev 1.261 and require sigsuspend and __thrsigdivert to take  (Claudio Jeker)
KERNEL_LOCK. There is at least a race in sigsuspend which can be triggered by dump(8). Should be enough to allow me to look for the real cause.
2024-04-18  If a proc has P_WEXIT set do not stop it, let it exit since it is already  (Claudio Jeker)
mostly dead. This is more like belts and suspenders since a proc in exit1() will not receive signals anymore and so proc_stop() should not be reachable. This is even the case when sigexit() is called and a coredump() is happening. OK mpi@
2024-04-18  Clear PCATCH for procs that have P_WEXIT set.  (Claudio Jeker)
Exiting procs will not return to userland and cannot deliver signals, so it is better to not even try. OK mpi@
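In the sleep path this amounts to masking the flag out before it can have any effect; a minimal sketch:

    /* an exiting proc never returns to userland, so don't make
     * its sleep interruptible by signals */
    if (p->p_flag & P_WEXIT)
        priority &= ~PCATCH;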
2024-04-17  dogetrusage() must be called with the KERNEL_LOCK held for now.  (Claudio Jeker)
OK mpi@
2024-04-15  Don't take solock() in soreceive() for udp(4) sockets.  (Vitaliy Makkoveev)
These sockets are not connection oriented, they don't call pru_rcvd(), but they have splicing ability and they set `so_error'. Splicing ability is the biggest problem. However, we can hold `sb_mtx' around `ssp_socket' modifications together with solock(), so `sb_mtx' is enough for the isspliced() check in soreceive(). The unlocked `so_sp' dereference is fine, because we set it only once for the whole socket life-time and we do this before the `ssp_socket' assignment. We also need to take sblock() before splicing sockets, so sosplice() and soreceive() are both serialized. Since `sb_mtx' is required to unsplice sockets too, it also serializes somove() with soreceive() regardless of the somove() caller. sosplice() was reworked to accept a standalone sblock() for udp(4) sockets. soreceive() performs an unlocked `so_error' check and modification. Previously, we had no ability to predict which concurrent soreceive() or sosend() thread would fail and clear `so_error'. With this unlocked access we could have sosend() and soreceive() threads which fail together. `so_error' is stored in a local `error2' variable because `so_error' could be overwritten by a concurrent sosend() thread. Tested and ok bluhm
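The `so_error' handling described above boils down to taking a private snapshot before clearing the field; a hedged sketch:

    /* copy so_error into a local: a concurrent sosend() thread may
     * observe and rewrite it, so never reread so->so_error later */
    error2 = so->so_error;
    if (error2) {
        so->so_error = 0;
        error = error2;
    }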
2024-04-15  Regen after sigsuspend and __thrsigdivert unlock  (Claudio Jeker)
2024-04-15  sigsuspend and __thrsigdivert no longer require the KERNEL_LOCK since  (Claudio Jeker)
dosigsuspend() no longer needs it. OK mvs@ mpi@
2024-04-13  correct indentation  (Jonathan Gray)
no functional change, found by smatch warnings ok miod@ bluhm@