path: root/sys/kern
2021-01-29  Use NULL instead of 0 to clear v_socket pointer (which actually clears all of the v_un pointers).  (Claudio Jeker)
OK jsg@ mvs@
2021-01-29  Whitespace.  (rob)
2021-01-28  Show when witness(4) has run out of lock order data entries.  (Visa Hankala)
This makes it clearer why lock order traces are sometimes not displayed. Prompted by a question from, and OK anton@
2021-01-27  kqueue: Fix termination assert  (Visa Hankala)
When a kqueue file is closed, the kqueue can still have threads scanning it. Consequently, kqueue_terminate() can see scan markers in the event queue. These markers are removed when the scanning threads leave the kqueue. Take this into account when checking the queue's state, to avoid a panic when kqueue is closed from under a thread. OK anton@ Reported-by: syzbot+757c60a2aa1125137cce@syzkaller.appspotmail.com
2021-01-20  If pledge "wroute" is missing for setsockopt SO_RTABLE, print failure message "wroute" into dmesg.  (Alexander Bluhm)
Since revision 1.263 pledge "wroute" allows changing the routing table of a socket. OK florian@ semarie@
2021-01-19  kern/subr_disk.c: convert ifunit() to if_unit(9)  (mvs)
ok dlg@
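A minimal sketch of the if_unit(9) pattern such a conversion moves to; the surrounding error handling is illustrative, not the actual subr_disk.c code:

    /*
     * if_unit(9) returns a referenced ifnet, unlike the old ifunit(),
     * so the reference must be released with if_put(9).
     */
    struct ifnet *ifp;

    ifp = if_unit(name);
    if (ifp == NULL)
        return (ENXIO);
    /* ... use the interface ... */
    if_put(ifp);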
2021-01-19  /etc/malloc.conf path-approval in pledge is no longer needed since 6.5 moved option control into a sysctl.  (Theo de Raadt)
Reminder from benjamin baier that we can delete this.
2021-01-18  regen  (mvs)
2021-01-18  Unlock getppid(2).  (mvs)
ok mpi@
2021-01-17  Cache parent's pid as `ps_ppid' and use it instead of `ps_pptr->ps_pid'.  (mvs)
This allows us to unlock getppid(2). ok mpi@
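A minimal sketch of the idea, not the literal diff: with the parent's pid cached in the child's own process structure, getppid(2) no longer has to follow ps_pptr, so it can run without the kernel lock.

    /* Before: chases the parent pointer, so the parent must be kept stable. */
    *retval = p->p_p->ps_pptr->ps_pid;

    /* After: reads a field of the current process only. */
    *retval = p->p_p->ps_ppid;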
2021-01-17  kqueue: Revise fd close notification  (Visa Hankala)
Deliver file descriptor close notification for __EV_POLL knotes through struct kevent that kqueue_scan() returns. This replaces the previous way of returning EBADF from kqueue_scan(), making it easier to determine what exactly has changed.

When a file descriptor is closed, its __EV_POLL knotes are turned into one-shot events and queued for delivery. These knotes are "unregistered" as they are reachable only through the queue of active events. This reduces interference with the normal workings of kqueue. However, more care is needed to avoid leaking knotes. In addition, the unregistering removes a limit on the number of issued knotes. To prevent accumulation of pending fd close notifications, kqpoll_init() flushes the active queue at the start of a kqpoll scan.

OK mpi@
2021-01-17  Replace SB_KNOTE and sb_flagsintr with direct checking of klist.  (Visa Hankala)
OK mpi@ as part of a larger diff
2021-01-14  syncer_thread: sleep without lbolt  (cheloha)
The syncer_thread() uses lbolt to perform periodic execution. We can do this without lbolt.

- Add a local wakeup(9) channel (syncer_chan) and sleep on it.

- Use a local copy of getnsecuptime() to get 1/hz resolution for time measurements. This is much better than using gettime(9), which is wholly unsuitable for this use case. Measure how long we spend in the loop and use this to calculate how long to sleep until the next execution. NB: getnsecuptime() is probably ready to be moved to kern_tc.c and documented.

- Use the system uptime instead of the UTC time to avoid issues with time jumps.

ok mpi@
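A minimal sketch of the timing pattern described above, with the loop body elided and a one-second interval assumed; getnsecuptime() is taken to return the low-res uptime in nanoseconds:

    uint64_t elapsed, start;

    for (;;) {
        start = getnsecuptime();
        /* ... sync out dirty vnodes ... */
        elapsed = getnsecuptime() - start;

        /* Sleep for whatever is left of the interval. */
        if (elapsed < SEC_TO_NSEC(1))
            tsleep_nsec(&syncer_chan, PWAIT, "syncer",
                SEC_TO_NSEC(1) - elapsed);
    }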
2021-01-13  kernel, sysctl(8): remove dead variable: tickadj  (cheloha)
The global "tickadj" variable is a remnant of the old NTP adjustment code we used in the kernel before the current timecounter subsystem was imported from FreeBSD circa 2004 or 2005. Fifteen years hence it is completely vestigial and we can remove it. We probably should have removed it long ago but I guess it slipped through the cracks. FreeBSD removed it in 2002: https://cgit.freebsd.org/src/commit/?id=e1d970f1811e5e1e9c912c032acdcec6521b2a6d NetBSD and DragonflyBSD can probably remove it, too. We export tickadj via the kern.clockrate sysctl(2), so update sysctl.2 and sysctl(8) accordingly. Hypothetically this change could break someone's sysctl(8) parsing script. I don't think that's very likely. ok mvs@
2021-01-13  Convert mbuf type KDASSERT() to a proper KASSERT() in m_get(9).  (Alexander Bluhm)
Should prevent using an uninitialized value as a bogus counter index. OK mvs@ claudio@ anton@
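A minimal sketch of the kind of check involved; the MT_NTYPES-style upper bound is an assumption here, and the exact assertion in m_get(9) may differ:

    /* Reject a garbage mbuf type before it is used as a counter index. */
    KASSERT(type >= 0 && type < MT_NTYPES);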
2021-01-11  New rw_obj_init() API providing reference-counted rwlock.  (Martin Pieuchot)
Original port from NetBSD by guenther@, required for upcoming amap & anon locking. ok kettenis@
2021-01-11  Simplify sleep signal handling a bit by introducing sleep_signal_check().  (Claudio Jeker)
The common code is moved to sleep_signal_check(), and instead of the multiple state variables sls_sig and sls_unwind, only a single sls_sigerr is set. This simplifies the checks in sleep_finish_signal() a great deal. Idea from and OK mpi@
2021-01-09  Split hierarchical calls into kern_sysctl_dirs  (gnezdo)
Removed a rash of +/-1 and made both functions shorter and more focused. OK millert@
2021-01-09  Reduce case duplication in kern_sysctl  (gnezdo)
This changes amd64 GENERIC.MP .text size of kern_sysctl.o from 6440 to 6400. Surprisingly, RAMDISK grows from 1645 to 1678. OK millert@, mglocker@
2021-01-09  Enforce range with sysctl_int_bounded in sysctl_wdog  (gnezdo)
OK millert@
2021-01-09  Enforce range with sysctl_int_bounded in witness_sysctl_watch  (gnezdo)
Makes previously explicit checking less verbose. OK millert@
2021-01-09  Use sysctl_int_bounded in sysctl_hwsmt  (gnezdo)
Prefer error reporting to silent clipping. OK millert@
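A minimal sketch of the sysctl_int_bounded() pattern these conversions share; the knob variable and the 0..1 range are placeholders:

    /*
     * Instead of copying the int in and out by hand and range-checking
     * the new value afterwards, reject out-of-range input with an error
     * rather than silently clipping it.
     */
    return (sysctl_int_bounded(oldp, oldlenp, newp, newlen,
        &some_knob, 0, 1));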
2021-01-09  If the loop check in somove(9) goes to release without setting an error, a broadcast mbuf will stay in the socket buffer forever.  (Alexander Bluhm)
This is bad as multiple mbufs can use up all the space. Better report ELOOP, dissolve splicing, and let userland handle it. OK anton@
2021-01-09  Replace a custom linked list with SLIST.  (Visa Hankala)
2021-01-09  Replace SIMPLEQ with SLIST because the code does not need a queue.  (Visa Hankala)
2021-01-09  Remove unnecessary relocking of w_mtx as panic() should not return.  (Visa Hankala)
2021-01-08  Lock kernel before raising SPL in klist_lock()  (Visa Hankala)
This prevents unwanted spinning with interrupts disabled. At the moment, this code is only invoked through klist_invalidate() and the callers should already hold the kernel lock. Also, one could argue that in MP-unsafe contexts klist_lock() should only assert for the kernel lock.
2021-01-08  Fix boot-time crash on sparc64  (Visa Hankala)
On sparc64, initmsgbuf() is invoked before curcpu() is usable on the boot processor. Consequently, it is unsafe to use mutexes during the message buffer initialization. Avoid such use by skipping log_mtx when appending a newline from initmsgbuf(). Use mbp instead of msgbufp as the buffer argument to the putchar routine for consistency. Bug reported and fix suggested by miod@
2021-01-08  Revert "Implement select(2) and pselect(2) on top of kqueue."  (Visa Hankala)
The use of kqueue as backend has introduced a significant regression in the performance of select(2), so go back to using the original code. Some additional management overhead is to be expected when using kqueue. However, the overhead of the current implementation is too high. Reported by bluhm@ on bugs@
2021-01-07  Adjust comment about klist_invalidate()  (Visa Hankala)
2021-01-06  Add dt(4) TRACEPOINTs for pool_get() and pool_put(); this is similar to the ones added to malloc() and free().  (Claudio Jeker)
Pass the struct pool pointer as argv1 since it is currently not possible to pass the pool name to btrace. OK mpi@
2021-01-02  pool(9): remove ticks  (cheloha)
Change the pool(9) timeouts to use the system uptime instead of ticks.

- Change the timeouts from variables to macros so we can use SEC_TO_NSEC(). This means these timeouts are no longer patchable via ddb(4). dlg@ does not think this will be a problem, as the timeout intervals have not changed in years.

- Use low-res time to keep things fast. Add a local copy of getnsecuptime() to subr_pool.c to keep the diff small. We will need to move getnsecuptime() into kern_tc.c and document it later if we ever have other users elsewhere in the kernel.

- Rename ph_tick -> ph_timestamp and pr_cache_tick -> pr_cache_timestamp.

Prompted by tedu@ some time ago, but the effort stalled (may have been my fault). Input from kettenis@ and dlg@. Special thanks to mpi@ for help with struct shuffling. This change does not increase the size of struct pool_page_header or struct pool.

ok dlg@ mpi@
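A minimal sketch of the before/after comparison: ph_tick and ph_timestamp come from the message above, while pool_wait_free, POOL_WAIT_FREE, and the pool_p_free() call are illustrative placeholders:

    /* Before: interval in seconds scaled by hz, compared against ticks. */
    if (ticks - ph->ph_tick > pool_wait_free * hz)
        pool_p_free(pp, ph);

    /* After: interval as a nanosecond macro, compared against low-res uptime. */
    if (getnsecuptime() - ph->ph_timestamp > POOL_WAIT_FREE)
        pool_p_free(pp, ph);

where POOL_WAIT_FREE would expand to something like SEC_TO_NSEC(1).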
2021-01-01  copyright++;  (Jonathan Gray)
2020-12-31  Add trace points for malloc(9) and free(9). This makes them traceable via dt(4) and btrace(8).  (Claudio Jeker)
OK mpi@ millert@
2020-12-30  Set klist lock for pipes.  (Visa Hankala)
OK anton@, mpi@
2020-12-28  Analogous to the kern.audio.record sysctl parameter for audio(4) devices, introduce kern.video.record for video(4) devices.  (Marcus Glocker)
By default kern.video.record will be set to zero, blanking all data delivered by device drivers which attach to video(4). The idea was initially proposed by Laurence Tratt <laurie AT tratt DOT net>. ok mpi@
2020-12-28  Use per-CPU counters for fault and stats counters reached in uvm_fault().  (Martin Pieuchot)
ok kettenis@, dlg@
2020-12-26  Simplify parameters of pselregister().  (Visa Hankala)
OK mpi@
2020-12-25  Refactor klist insertion and removal  (Visa Hankala)
Rename klist_{insert,remove}() to klist_{insert,remove}_locked(). These functions assume that the caller has locked the klist. The current state of locking remains intact because the kernel lock is still used with all klists.

Add new functions klist_insert() and klist_remove() that lock the klist internally. This allows some code simplification.

OK mpi@
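A minimal sketch of the two resulting call patterns, with a placeholder softc and knote:

    /* The caller already holds the lock protecting the klist. */
    klist_insert_locked(&sc->sc_note, kn);

    /* Otherwise, let the klist take and drop its own lock. */
    klist_insert(&sc->sc_note, kn);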
2020-12-25  Small smr_grace_wait() optimization  (Visa Hankala)
Make the SMR thread maintain an explicit system-wide grace period and make CPUs observe the current grace period when crossing a quiescent state. This lets the SMR thread avoid a forced context switch for CPUs that have already entered the latest grace period. This change provides a small improvement in smr_grace_wait()'s performance in terms of context switching. OK mpi@, anton@
2020-12-24  tsleep(9): add global "nowake" channel for threads avoiding wakeup(9)  (cheloha)
It would be convenient if there were a channel a thread could sleep on to indicate they do not want any wakeup(9) broadcasts. The easiest way to do this is to add an "int nowake" to kern_synch.c and extern it in sys/systm.h. You use it like this:

    #include <sys/systm.h>

    tsleep_nsec(&nowake, ...);

There is now no need to handroll a local dead channel, e.g.

    int chan;

    tsleep_nsec(&chan, ...);

which expands the stack. Local dead channels will be replaced with &nowake in later patches.

One possible problem with this "one global channel" approach is sleep queue congestion. If you have lots of threads sleeping on &nowake you might slow down a wakeup(9) on a different channel that hashes into the same queue. Unsure how much of a problem this actually is, if at all.

NetBSD and FreeBSD have a "pause" interface in the kernel that chooses a suitable channel automatically. To keep things simple and avoid adding a new interface we will start with this global channel.

Discussed with mpi@, claudio@, kettenis@, and deraadt@. Basically designed by kettenis@, who vetoed my other proposals. Bugs caught by deraadt@, tb@, and patrick@.
2020-12-23  sigsuspend(2): change wmesg from "pause" to "sigsusp"  (cheloha)
Make it obvious where the thread is blocked. "pause" is ambiguous. Tweaked by kettenis@. Probably ok kettenis@.
2020-12-23  nanosleep(2): shorten wmesg from "nanosleep" to "nanoslp"  (cheloha)
We only see 8 characters of wmesg in e.g. top(1), so shorten the string to fit. Indirectly prompted by kettenis@.
2020-12-23  Ensure that filt_dead() takes effect  (Visa Hankala)
Invoke dead_filtops' f_event callback in klist_invalidate() to ensure that filt_dead() modifies every invalidated knote. If a knote has EV_ONESHOT set in its event flags, kqueue_scan() will not call f_event. OK mpi@
2020-12-23  Clear error before each iteration in kqueue_scan()  (Visa Hankala)
This fixes a regression where kqueue_scan() may incorrectly return EWOULDBLOCK after a timeout. OK mpi@
2020-12-22  Implement select(2) and pselect(2) on top of kqueue.  (Martin Pieuchot)
The given set of fds are converted to equivalent kevents using EV_SET(2) and passed to the scanning internals of kevent(2): kqueue_scan(). ktrace(1) will now output the converted kevents on top of the usual set bits to be able to find possible errors in the conversion. This switch implies that select(2) and pselect(2) will now query the underlying kqfilters instead of the *_poll() routines. Based on similar work done on DragonFlyBSD with input from visa@, millert@, anton@, cheloha@, thanks! ok visa@
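A minimal sketch of the conversion for a descriptor present in the read set; the exact flags are an assumption, as the kernel additionally marks such kevents internally (see the __EV_POLL entries above):

    struct kevent kev;

    /* One fd being polled for reading becomes one EVFILT_READ kevent. */
    if (FD_ISSET(fd, readfds))
        EV_SET(&kev, fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);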
2020-12-20  Introduce klistops  (Visa Hankala)
This patch extends struct klist with a callback descriptor and an argument. The main purpose of this is to let the kqueue subsystem assert when a klist should be locked, and operate the klist lock in klist_invalidate().

Access to a knote list of a kqueue-monitored object has to be serialized somehow. Because the object often has a lock for protecting its state, and because the object often acquires this lock at the latest in its f_event callback function, it makes sense to use this lock also for the knote lists. The existing uses of NOTE_SUBMIT already show a pattern that is likely to become more prevalent.

There could be an embedded lock in klist. However, such a lock would be redundant in many cases. The code cannot rely on a single lock type (mutex, rwlock, something else) because the needs of monitored objects vary. In addition, an embedded lock would introduce new lock order constraints. Note that the patch does not rule out use of dedicated klist locks.

The patch introduces a way to associate lock operations with a klist. The caller can provide a custom implementation, or use a ready-made interface with a mutex or rwlock. For compatibility with old code, the new code falls back to using the kernel lock if no specific klist initialization has been done. The existing code already relies on implicit initialization of klist.

Sadly, this change increases the size of struct klist. dlg@ thinks this is not fatal, though.

OK mpi@
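A minimal sketch of the ready-made mutex interface mentioned above, assuming a klist_init_mutex()-style helper and a hypothetical driver softc:

    struct foo_softc {
        struct mutex    sc_mtx;
        struct klist    sc_note;
    };

    /* Tie the softc's knote list to its own mutex. */
    mtx_init(&sc->sc_mtx, IPL_MPFLOOR);
    klist_init_mutex(&sc->sc_note, &sc->sc_mtx);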
2020-12-18  Add fd close notification for kqueue-based poll() and select()  (Visa Hankala)
When the file descriptor of an __EV_POLL-flagged knote is closed, post EBADF through the kqueue instance to the caller of kqueue_scan(). This lets kqueue-based poll() and select() preserve their current behaviour of returning EBADF when a polled file descriptor is closed concurrently. OK mpi@
2020-12-18  Make knote_{activate,remove}() internal to kern_event.c.  (Visa Hankala)
OK mpi@
2020-12-16  Remove kqueue_free() and use KQRELE() in kqpoll_exit().  (Visa Hankala)
Because kqpoll instances are now linked to the file descriptor table, the freeing of kqpoll and ordinary kqueues is similar. Suggested by mpi@