ok mpi@ miod@
----------------------------
goes on in SMR.
OK mpi@
----------------------------
waiting on CPUs that didn't spin up. This will allow us to spin down
CPUs in the future to save power as well.
ok mpi@
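
A minimal sketch of the idea, assuming the kernel's CPU_INFO_FOREACH()
iterator and a CPU_IS_RUNNING()-style predicate; the loop body is
illustrative, not the committed diff:

#include <sys/param.h>
#include <sys/sched.h>

/*
 * Sketch only: skip CPUs that have not been started (or have been
 * spun down) when waiting for the grace period, so the wait cannot
 * stall on a CPU that will never pass a quiescent state.
 */
void
smr_grace_wait_sketch(void)
{
        CPU_INFO_ITERATOR cii;
        struct cpu_info *ci;

        CPU_INFO_FOREACH(cii, ci) {
                if (!CPU_IS_RUNNING(ci))
                        continue;       /* nothing to wait for here */
                /* ... wait until ci passes a quiescent state ... */
        }
}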
----------------------------
those bits.
----------------------------
ok drahn@
----------------------------
Make the SMR thread maintain an explicit system-wide grace period and
make CPUs observe the current grace period when crossing a quiescent
state. This lets the SMR thread avoid a forced context switch for CPUs
that have already entered the latest grace period.
This change slightly improves smr_grace_wait()'s performance by
reducing the number of forced context switches.
OK mpi@, anton@
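
A rough sketch of the bookkeeping, with illustrative names and layout
(the committed code keeps the per-CPU value elsewhere); READ_ONCE()
and WRITE_ONCE() are the <sys/atomic.h> accessors:

#include <sys/atomic.h>

static unsigned long smr_grace;         /* current grace period */

struct smr_pcpu {
        unsigned long   sp_observed;    /* last period observed */
};

/* Quiescent-state path, run on the local CPU. */
void
smr_observe(struct smr_pcpu *sp)
{
        WRITE_ONCE(sp->sp_observed, READ_ONCE(smr_grace));
}

/* SMR thread: only CPUs behind the current period need a forced switch. */
int
smr_cpu_needs_switch(const struct smr_pcpu *sp)
{
        return READ_ONCE(sp->sp_observed) != READ_ONCE(smr_grace);
}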
----------------------------
panic message shows the actual code location of the assert. Do this by
moving the assert logic inside the macros.
Prompted by and OK claudio@
OK mpi@
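
An illustrative comparison, not the exact macros: with the check in a
helper function, panic() always reports the helper's location, whereas
a check in the macro body makes __FILE__ and __LINE__ expand at the
caller. smr_in_critical_section() is a hypothetical predicate here.

#include <sys/systm.h>

/* Before: every failure reports the same location in the helper. */
#define SMR_ASSERT_SKETCH_OLD() smr_assert_func()

/* After: the panic points at the failing assert itself. */
#define SMR_ASSERT_SKETCH() do {                                \
        if (!smr_in_critical_section())                         \
                panic("assertion failed at %s:%d",              \
                    __FILE__, __LINE__);                        \
} while (0)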
----------------------------
prevents the appearance of a "smr: dispatch took N seconds" message
during boot when there is an early smr_call(). Such a call can happen
with mfii(4). The initial dispatch cannot make progress until
smr_grace_wait() can visit all CPUs.
This fix is essentially a hack. It makes use of the fact that there
is no hard guarantee on how quickly the callback of smr_call() gets
invoked. It is assumed that the SMR call backlog does not grow large
during boot.
An alternative fix is to make smr_grace_wait() skip secondary CPUs
until they have been started. However, this could break if the spinup
logic of secondary CPUs was changed.
Delayed SMR dispatch reported and fix tested by Hrvoje Popovski
Discussed with and OK kettenis@, claudio@
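
A sketch of the workaround, with illustrative names (ncpus is a kernel
global; ncpus_running is a hypothetical counter of started CPUs): hold
back the first dispatch until all CPUs are up, relying on the fact
that smr_call() makes no promise about how soon the callback runs.

void
smr_dispatch_sketch(void)
{
        extern int ncpus;               /* kernel global */
        extern int ncpus_running;       /* hypothetical counter */

        if (ncpus_running < ncpus)
                return;                 /* retry on a later tick */

        smr_grace_wait();
        /* ... run the queued callbacks ... */
}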
----------------------------
ok mpi@
----------------------------
Equivalent to their unsuffixed counterparts except that (a) they take
a timeout in terms of nanoseconds, and (b) INFSLP, aka UINT64_MAX (not
zero), indicates that a timeout should not be set.
For now, zero nanoseconds is not a strictly valid invocation: we log a
warning on DIAGNOSTIC kernels if we see such a call. We still sleep
until the next tick in such a case, however. In the future this could
become some sort of poll... TBD.
To facilitate conversions to these interfaces: add inline conversion
functions to sys/time.h for turning your timeout into nanoseconds.
Also do a few easy conversions for warmup and to demonstrate how
further conversions should be done.
Lots of input from mpi@ and ratchov@. Additional input from tedu@,
deraadt@, mortimer@, millert@, and claudio@.
Partly inspired by FreeBSD r247787.
positive feedback from deraadt@, ok mpi@
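
For example, a sleep of 100 milliseconds and an untimed sleep might be
converted like this (the identifier names are illustrative;
MSEC_TO_NSEC() is one of the new sys/time.h helpers):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/time.h>

int
example_wait(void *ident)
{
        int error;

        /* Formerly tsleep(ident, PWAIT, "example", hz / 10). */
        error = tsleep_nsec(ident, PWAIT, "example", MSEC_TO_NSEC(100));
        if (error)
                return error;

        /* No timeout at all: pass INFSLP, not zero. */
        return tsleep_nsec(ident, PWAIT, "example", INFSLP);
}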
----------------------------
because now the error is detected before context switch.
The sleep code path eventually calls assertwaitok() in mi_switch(),
so the assertwaitok() in the SMR barrier function is somewhat redundant
and can be removed.
OK mpi@
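
A sketch of the resulting shape, assuming the SMR_ASSERT_NONCRITICAL()
macro from <sys/smr.h>; the function body is illustrative only:

#include <sys/smr.h>

void
smr_barrier_sketch(void)
{
        SMR_ASSERT_NONCRITICAL();       /* fail fast, before sleeping */

        /*
         * No assertwaitok() here: if this code sleeps, the sleep
         * path reaches mi_switch(), which performs the same check.
         */
        /* ... sleep until the grace period has expired ... */
}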
----------------------------
does not establish strong enough ordering between CPUs. Consequently,
smr_grace_wait() might incorrectly skip a CPU and invoke an SMR
callback too early.
Prompted by haesbaert@
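
A sketch of the hazard (names illustrative, not the committed code):
if the waiter's store of the new grace period and its load of the
remote CPU's state are not separated by a full barrier, the load can
be reordered before the store and read a stale "quiescent" value.

#include <sys/atomic.h>

int
smr_cpu_is_quiet_sketch(volatile unsigned long *grace,
    const volatile unsigned long *cpu_state, unsigned long gp)
{
        *grace = gp;
        membar_sync();  /* full barrier: store above, load below */
        return (*cpu_state >= gp);
}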
----------------------------
checking done in taskq_barrier(9) and timeout_barrier(9).
OK mpi@
----------------------------
objects that readers can access without locking. This provides a basis
for read-copy-update operations.
Readers access SMR-protected shared objects inside SMR read-side
critical sections, where sleeping is not allowed. To reclaim
an SMR-protected object, the writer has to ensure mutual exclusion of
other writers, remove the object's shared reference and wait until
read-side references cannot exist any longer. As an alternative to
waiting, the writer can schedule a callback that gets invoked when
reclamation is safe.
The mechanism relies on CPU quiescent states to determine when an
SMR-protected object is ready for reclamation.
The <sys/smr.h> header additionally provides an implementation of
singly- and doubly-linked lists that can be used together with SMR.
These lists allow lockless read access with a concurrent writer.
Discussed with many
OK mpi@ sashan@
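
As a usage sketch under these rules (the struct, field, and function
names are hypothetical; the smr_* calls and list macros are the
<sys/smr.h> interfaces the commit describes):

#include <sys/malloc.h>
#include <sys/smr.h>

struct item {
        SMR_LIST_ENTRY(item)    i_entry;
        int                     i_key;
        struct smr_entry        i_smr;
};

SMR_LIST_HEAD(item_list, item);

void
item_free_cb(void *arg)
{
        free(arg, M_DEVBUF, sizeof(struct item));
}

/* Reader: lockless, but must not sleep inside the critical section. */
int
item_exists(struct item_list *list, int key)
{
        struct item *i;
        int found = 0;

        smr_read_enter();
        SMR_LIST_FOREACH(i, list, i_entry) {
                if (i->i_key == key) {
                        found = 1;
                        break;
                }
        }
        smr_read_leave();

        return found;
}

/* Writer: callers serialize among themselves; reclamation is deferred. */
void
item_remove(struct item *i)
{
        SMR_LIST_REMOVE_LOCKED(i, i_entry);
        smr_init(&i->i_smr);
        smr_call(&i->i_smr, item_free_cb, i);   /* freed after grace period */
}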