path: root/sys/kern/kern_smr.c
2022-08-14  Jonathan Gray
    remove unneeded includes in sys/kern
    ok mpi@ miod@

2021-11-24  Visa Hankala
    Fix type of count.

2021-11-24  Visa Hankala
    Simplify arithmetic on the main path.

2021-11-24  Claudio Jeker
    Add a few dt(4) TRACEPOINTS to SMR.  Should help to better understand
    what goes on in SMR.
    OK mpi@

2021-07-06  Mark Kettenis
    Introduce CPU_IS_RUNNING() and use it in scheduler-related code to
    prevent waiting on CPUs that didn't spin up.  This will allow us to
    spin down CPUs in the future to save power as well.
    ok mpi@

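As an illustration of the pattern this commit describes (the function name
and loop body below are hypothetical, not the actual kern_smr.c code), a
walk over all CPUs can skip processors that have not spun up by testing
CPU_IS_RUNNING():

#include <sys/param.h>
#include <sys/systm.h>
#include <machine/cpu.h>

/*
 * Hypothetical helper: visit every CPU that is actually running.
 * CPUs that never spun up (or were spun down to save power) are
 * skipped, so the caller cannot block waiting on them.
 */
void
example_visit_running_cpus(void (*visit)(struct cpu_info *))
{
	CPU_INFO_ITERATOR cii;
	struct cpu_info *ci;

	CPU_INFO_FOREACH(cii, ci) {
		if (!CPU_IS_RUNNING(ci))
			continue;
		(*visit)(ci);
	}
}
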
2021-06-29  Mark Kettenis
    Didn't intend to commit the CPU_IS_RUNNING() changes just yet, so
    revert those bits.

2021-06-29  Mark Kettenis
    SMP support.  Mostly works, but occasionally craps out during boot.
    ok drahn@

2020-12-25  Visa Hankala
    Small smr_grace_wait() optimization

    Make the SMR thread maintain an explicit system-wide grace period and
    make CPUs observe the current grace period when crossing a quiescent
    state.  This lets the SMR thread avoid a forced context switch for
    CPUs that have already entered the latest grace period.

    This change provides a small improvement in smr_grace_wait()'s
    performance in terms of context switching.

    OK mpi@, anton@

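A conceptual sketch of the idea, using entirely hypothetical names rather
than the real kern_smr.c state: each CPU records the newest grace period it
has observed at a quiescent state, and the waiter only forces a context
switch on CPUs still behind the current period.

/* All identifiers below are illustrative, not kern_smr.c internals. */
struct example_pcpu {
	unsigned long	ep_gp_observed;	/* updated at quiescent states */
};

extern unsigned long	example_gp_current;	/* advanced by the SMR thread */

/* Called when a CPU crosses a quiescent state: note the current period. */
void
example_quiescent(struct example_pcpu *ep)
{
	ep->ep_gp_observed = example_gp_current;
}

/* The waiter can skip CPUs that already entered the latest period. */
int
example_needs_forced_switch(const struct example_pcpu *ep)
{
	return (ep->ep_gp_observed != example_gp_current);
}
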
2020-04-03  Visa Hankala
    Adjust SMR_ASSERT_CRITICAL() and SMR_ASSERT_NONCRITICAL() so that the
    panic message shows the actual code location of the assert.  Do this
    by moving the assert logic inside the macros.

    Prompted by and OK claudio@
    OK mpi@

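A minimal sketch of why this works, with a hypothetical smr_crit_depth()
predicate standing in for the real per-CPU state: because the test and the
panic() call are expanded at each call site, __FILE__ and __LINE__ resolve
to the caller's location instead of a helper function inside kern_smr.c.

#include <sys/systm.h>

/* Hypothetical sketch, not the actual macro from <sys/smr.h>. */
extern unsigned int smr_crit_depth(void);

#define	EXAMPLE_ASSERT_NONCRITICAL() do {				\
	if (smr_crit_depth() != 0)					\
		panic("%s:%d: in SMR critical section",			\
		    __FILE__, __LINE__);				\
} while (0)
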
2020-02-25  Visa Hankala
    Start the SMR thread when all CPUs are ready for scheduling.  This
    prevents the appearance of a "smr: dispatch took N seconds" message
    during boot when there is an early smr_call().  Such a call can
    happen with mfii(4).  The initial dispatch cannot make progress until
    smr_grace_wait() can visit all CPUs.

    This fix is essentially a hack.  It makes use of the fact that there
    is no hard guarantee on how quickly the callback of smr_call() gets
    invoked.  It is assumed that the SMR call backlog does not grow large
    during boot.

    An alternative fix is to make smr_grace_wait() skip secondary CPUs
    until they have been started.  However, this could break if the
    spinup logic of secondary CPUs was changed.

    Delayed SMR dispatch reported and fix tested by Hrvoje Popovski
    Discussed with and OK kettenis@, claudio@

2019-12-30  Jonathan Gray
    convert infinite msleep(9) to msleep_nsec(9)
    ok mpi@

2019-07-03  cheloha
    Add tsleep_nsec(9), msleep_nsec(9), and rwsleep_nsec(9).

    Equivalent to their unsuffixed counterparts except that (a) they take
    a timeout in terms of nanoseconds, and (b) INFSLP, aka UINT64_MAX
    (not zero) indicates that a timeout should not be set.

    For now, zero nanoseconds is not a strictly valid invocation: we log
    a warning on DIAGNOSTIC kernels if we see such a call.  We still
    sleep until the next tick in such a case, however.  In the future
    this could become some sort of poll... TBD.

    To facilitate conversions to these interfaces: add inline conversion
    functions to sys/time.h for turning your timeout into nanoseconds.
    Also do a few easy conversions for warmup and to demonstrate how
    further conversions should be done.

    Lots of input from mpi@ and ratchov@.  Additional input from tedu@,
    deraadt@, mortimer@, millert@, and claudio@.

    Partly inspired by FreeBSD r247787.

    positive feedback from deraadt@, ok mpi@

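A hedged sketch of the conversion pattern these interfaces enable (the
example_softc structure, function names, and wait-message string are
illustrative, not from the tree): a bounded wait converts its timeout to
nanoseconds with MSEC_TO_NSEC(), and an indefinite wait passes INFSLP
rather than 0.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mutex.h>
#include <sys/time.h>

struct example_softc {
	struct mutex	sc_mtx;
	int		sc_flags;
};

/* Wait up to 100 ms for a wakeup on sc_flags. */
int
example_wait(struct example_softc *sc)
{
	int error;

	mtx_enter(&sc->sc_mtx);
	error = msleep_nsec(&sc->sc_flags, &sc->sc_mtx, PRIBIO, "exmplw",
	    MSEC_TO_NSEC(100));
	mtx_leave(&sc->sc_mtx);

	return (error);
}

/* An "infinite" sleep passes INFSLP (UINT64_MAX), not 0. */
int
example_wait_forever(struct example_softc *sc)
{
	int error;

	mtx_enter(&sc->sc_mtx);
	error = msleep_nsec(&sc->sc_flags, &sc->sc_mtx, PRIBIO, "exmplw",
	    INFSLP);
	mtx_leave(&sc->sc_mtx);

	return (error);
}
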
2019-05-17  Visa Hankala
    Add SMR_ASSERT_NONCRITICAL() in assertwaitok().  This eases debugging
    because now the error is detected before context switch.

    The sleep code path eventually calls assertwaitok() in mi_switch(),
    so the assertwaitok() in the SMR barrier function is somewhat
    redundant and can be removed.

    OK mpi@

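A simplified sketch of the idea (not the exact kernel code): with the SMR
assertion placed in the wait-ok check, a sleep attempted from inside an SMR
read-side critical section panics at the point where the sleep is
requested, before any context switch happens.

#include <sys/smr.h>

/* Simplified stand-in for the kernel's "is it safe to sleep?" check. */
void
example_assertwaitok(void)
{
	/* Panics if called inside an SMR read-side critical section. */
	SMR_ASSERT_NONCRITICAL();
	/* ... other sleep-safety checks would follow here ... */
}
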
2019-05-16  Visa Hankala
    Remove incorrect optimization.  The current logic for skipping idle
    CPUs does not establish strong enough ordering between CPUs.
    Consequently, smr_grace_wait() might incorrectly skip a CPU and
    invoke an SMR callback too early.

    Prompted by haesbaert@

2019-05-14  Visa Hankala
    Add lock order checking for smr_barrier(9).  This is similar to the
    checking done in taskq_barrier(9) and timeout_barrier(9).
    OK mpi@

2019-02-26  Visa Hankala
    Introduce safe memory reclamation, a mechanism for reclaiming shared
    objects that readers can access without locking.  This provides a
    basis for read-copy-update operations.

    Readers access SMR-protected shared objects inside SMR read-side
    critical section where sleeping is not allowed.  To reclaim an
    SMR-protected object, the writer has to ensure mutual exclusion of
    other writers, remove the object's shared reference and wait until
    read-side references cannot exist any longer.  As an alternative to
    waiting, the writer can schedule a callback that gets invoked when
    reclamation is safe.

    The mechanism relies on CPU quiescent states to determine when an
    SMR-protected object is ready for reclamation.

    The <sys/smr.h> header additionally provides an implementation of
    singly- and doubly-linked lists that can be used together with SMR.
    These lists allow lockless read access with a concurrent writer.

    Discussed with many
    OK mpi@ sashan@

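A hedged sketch of the usage pattern described above, assuming the
smr(9)-style interface (smr_read_enter()/smr_read_leave(), smr_call(), and
the SMR_LIST macros from <sys/smr.h>); the "foo" structure, list, and
helper names are illustrative only.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/smr.h>

struct foo {
	SMR_LIST_ENTRY(foo)	f_list;
	int			f_key;
	struct smr_entry	f_smr;
};

SMR_LIST_HEAD(, foo)	foo_list;	/* SMR_LIST_INIT() at attach time */
struct mutex		foo_mtx = MUTEX_INITIALIZER(IPL_NONE);

/* Reader: lockless, but no sleeping inside the critical section. */
int
foo_exists(int key)
{
	struct foo *f;
	int found = 0;

	smr_read_enter();
	SMR_LIST_FOREACH(f, &foo_list, f_list) {
		if (f->f_key == key) {
			found = 1;
			break;
		}
	}
	smr_read_leave();

	return (found);
}

/* Reclamation callback, invoked once no reader can still see "f". */
void
foo_free_cb(void *arg)
{
	struct foo *f = arg;

	free(f, M_DEVBUF, sizeof(*f));
}

/* Writer: serialize against other writers, unlink, defer the free. */
void
foo_remove(struct foo *f)
{
	mtx_enter(&foo_mtx);
	SMR_LIST_REMOVE_LOCKED(f, f_list);
	mtx_leave(&foo_mtx);

	smr_init(&f->f_smr);
	smr_call(&f->f_smr, foo_free_cb, f);
}

Instead of deferring the free with smr_call(), the writer could block in
smr_barrier() after unlinking and then free the object directly; that is
the "wait until read-side references cannot exist any longer" alternative
mentioned in the commit message.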