Age | Commit message | Author
|
|
|
|
|
rnd.c uses nanotime to get access to some bits that change quickly
between events that it can mix into the entropy pool. it doesn't
use nanotime to get a monotonically increasing set of ordered and
accurate timestamps, it just wants something with bits that change.
there have been discussions for years about letting rnd use a clock
that's super fast to read, but not necessarily accurate, but it
wasn't until recently that i figured out it wasn't interested in
time at all, so things like keeping a fast clock coherent between
cpu cores or correct according to ntp are unnecessary. this means we
can just let rnd read the cycle counters on cpus and things will
be fine. cpus with cycle counters that vary in their speed and
aren't kept consistent between cores may even be desirable in this
context.
so this is the first step in converting rnd.c to reading cycle
counters. it copies the nanotime backend to each arch, and they can
replace it with something MD as a second step later on.
djm@ suggested rnd_messybytes, but we landed on cpu_rnd_messybits.
thanks to visa for his eyes.
ok deraadt@ visa@
deraadt@ says he will help handle any MD fallout that occurs.
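As an illustration of the direction (an editor's sketch, not the committed code: the function names, the userland clock_gettime() stand-in for nanotime(9), and the rdtsc choice for amd64 are all assumptions):

#include <stdint.h>
#include <time.h>

/* MI fallback: fold the changing bits of a high-resolution timestamp. */
static inline unsigned long
sketch_rnd_messybits_mi(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);	/* userland stand-in for nanotime(9) */
	return ((unsigned long)ts.tv_sec ^ (unsigned long)ts.tv_nsec);
}

/* possible MD version for amd64: the raw TSC is enough, accuracy is irrelevant */
#if defined(__x86_64__)
static inline unsigned long
sketch_rnd_messybits_md(void)
{
	uint32_t hi, lo;

	__asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((unsigned long)hi << 32 | lo);
}
#endif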
|
|
conversion steps). it only contains kernel prototypes for 4 interfaces,
all of which legitimately belong in sys/systm.h, which is already included
by all enqueue_randomness() users.
|
|
|
|
and (b) the boot-time acceleration.
|
|
dequeue to mix a selection of "best" ring entries. Change the dequeue timeout
to back off exponentially, because excessive pool buffer generation is pointless
-- rekeys generally happen at 1.6MB or after a long timeout, so a lot of cpu
cycles were being wasted.
During boot-up (before timeouts work) aggressively consume enqueue damage
and rekey every time, to accelerate entropy injection into the chacha ring.
The goal is to compensate rapidly for weak seeding in unidentifiable
conditions, and to ensure quality for arc4random() calls early in boot.
ok kettenis visa
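A minimal sketch of the backoff idea (names, intervals and the cap are illustrative, not the actual rnd.c values):

#define DEQUEUE_MIN_MSEC	10
#define DEQUEUE_MAX_MSEC	10000

static unsigned int dequeue_msec = DEQUEUE_MIN_MSEC;

/* pick the next dequeue interval: reset on demand, otherwise double up to a cap */
static unsigned int
next_dequeue_interval(int pool_was_drained)
{
	if (pool_was_drained)
		dequeue_msec = DEQUEUE_MIN_MSEC;
	else if (dequeue_msec < DEQUEUE_MAX_MSEC)
		dequeue_msec *= 2;
	return (dequeue_msec);
}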
|
|
instead of hand-rolling the same code to set up a temporary ChaCha
instance.
tweak/ok semarie@, ok deraadt@
|
|
will frantically compensate.
ok kettenis
|
|
But it was a constant, which is really silly. Pass back the first
word from the middle layer.
ok visa
|
|
enqueue_randomness(), so make them local static instead of global.
|
|
defining the [size]
|
|
|
|
|
|
and change wording from 'entropy queue'; what we have is a ring which
collects 'damage' from successive calls until drawn down
|
|
it suggests we should reconsider this mechanism and do something
simpler... delete the explanation for now.
|
|
misleading, so rewrite it.
The interesting parts are bootblock-seeding from file + hwrng,
arc4random() being available incredibly early, and separate timeouts
to pull entropy data forward into a stir of the chacha state (one for
entropy ring crc whitening into a buffer, the 2nd for buffer folding
into the chacha).
Now that it is better documented, I can try to improve each component.
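Roughly, the two-timeout structure looks like the sketch below (editor's sketch: the stage functions, symbol names and intervals are placeholders; only timeout_set()/timeout_add_msec() are the real timeout(9) interfaces):

#include <sys/types.h>
#include <sys/timeout.h>

/* placeholder stage functions standing in for the real rnd.c internals */
void whiten_ring_into_buffer(void);	/* stage 1: crc-whiten damage ring into a buffer */
void fold_buffer_into_chacha(void);	/* stage 2: fold that buffer into the chacha state */

struct timeout dequeue_to, rekey_to;

void
dequeue_stage(void *arg)
{
	whiten_ring_into_buffer();
	timeout_add_msec(&dequeue_to, 100);	/* intervals are illustrative */
}

void
rekey_stage(void *arg)
{
	fold_buffer_into_chacha();
	timeout_add_msec(&rekey_to, 1000);
}

void
sketch_rnd_init(void)
{
	timeout_set(&dequeue_to, dequeue_stage, NULL);
	timeout_set(&rekey_to, rekey_stage, NULL);
	timeout_add_msec(&dequeue_to, 100);
	timeout_add_msec(&rekey_to, 1000);
}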
|
|
|
|
ok deraadt@
|
|
hit by millert
|
|
ok djm markus
|
|
adding more filter properties without cluttering the struct.
OK mpi@, anton@
|
|
make the structs const so that the data are put in .rodata.
OK mpi@, deraadt@, anton@, bluhm@
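For illustration (the struct and its fields are hypothetical; the point is only the effect of the const qualifier on placement):

/* hypothetical filter-properties struct, for illustration only */
struct filter_props {
	const char	*fp_name;
	int		 fp_flags;
};

/*
 * Without const this object lands in .data and stays writable for the
 * kernel's lifetime; with const it is placed in .rodata and mapped
 * read-only.
 */
static const struct filter_props example_filtprops = {
	.fp_name	= "example",
	.fp_flags	= 0,
};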
|
|
this gets rid of the source annotation which doesn't really add
anything other than adding complexity. randomness is generally
good enough that the few extra bits that the source type would
add are not worth it.
ok mikeb@ deraadt@
|
|
random data. But a new source of entropy arrived a few months ago
-- KARL generates highly disturbed images for some kernels (well,
not for bsd.rd).
This assumes the tail of text (just before etext[]) is readable.
We are trying to use a portable symbol name, and also avoid reading
a locore0 which has been unmapped...
ok mortimer
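A conceptual sketch of the idea (editor's sketch: the amount taken is arbitrary, and the single-argument enqueue_randomness() call reflects the current MI interface rather than the code as committed):

#include <sys/param.h>
#include <sys/systm.h>

extern char etext[];		/* standard end-of-text symbol */

static void
sketch_mix_text_tail(void)
{
	u_long buf[16];		/* illustrative amount of .text tail */
	size_t i;

	/* the bytes just before etext[] are assumed mapped and readable */
	memcpy(buf, etext - sizeof(buf), sizeof(buf));
	for (i = 0; i < nitems(buf); i++)
		enqueue_randomness((u_int)buf[i]);
}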
|
|
ok deraadt@
|
|
grabbing the rnglock repeatedly.
ok deraadt@ djm@
|
|
|
|
A lot of randomness event producers are executed in the interrupt
context, increasing the time spent in the interrupt handler and
resulting in extra costs when adding randomness data to the pool.
However, in practice randomness event producers require interlocking
between each other, but not with consumers, due to the opportunistic
nature of event consumers.
To be able to take advantage of this, the ring buffer indexing is now
done with two free-running producer and consumer counters, taken modulo
the power-of-2 size of the ring buffer.
With input from and OK visa, tb, jasper
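A minimal sketch of the indexing scheme (sizes and names are illustrative, not the rnd.c ones): the counters run freely as u_int and are only masked when the array is touched, so wrap-around is harmless.

#include <sys/types.h>

#define RING_SIZE	32			/* must be a power of two */
#define RING_MASK	(RING_SIZE - 1)

struct damage_ring {
	u_int	prod;				/* free-running producer counter */
	u_int	cons;				/* free-running consumer counter */
	u_int	slot[RING_SIZE];
};

/* producers interlock among themselves, e.g. under a mutex */
static void
ring_add(struct damage_ring *r, u_int val)
{
	r->slot[r->prod++ & RING_MASK] += val;
}

/* the consumer is opportunistic and needs no lock against producers */
static u_int
ring_take(struct damage_ring *r)
{
	return (r->slot[r->cons++ & RING_MASK]);
}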
|
|
sections, such as __attribute__((section(".openbsd.randomdata"))), may be
non-zero. In combination with "const" or "static" the compiler becomes even
more sure nothing can influence the object and assumes the value will be 0.
A few optimizations later, a security requirement has been removed.
Until a better annotation arrives in compilers, be warned: do not mix
const or static with these random objects; you won't get what you want.
Spotted in a regression test by bluhm, long discussion with kettenis.
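The pitfall in two declarations (illustrative, not taken from the tree):

/* OK: stays writable, so the randomly-filled value is actually loaded */
long cookie __attribute__((section(".openbsd.randomdata")));

/*
 * NOT OK: const lets the compiler assume the object keeps its
 * initializer (0) and constant-propagate that through every use.
 */
const long broken_cookie __attribute__((section(".openbsd.randomdata")));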
|
|
|
|
|
|
too late, leading to poor rng in the kernel early on. a behavioural
artifact in vmm spotted the issue.
ok tedu guenther mlarkin
|
|
|
|
remove it, another relic of the superstitious past.
ok deraadt millert mikeb
|
|
from rob pierce
|
|
known and we rely on the bootpath to prime us anyway.
This also solves the issue raised by kettenis, of version potentially
being non-word aligned.
ok kettenis djm
|
|
beyond the end of .text/.rodata.
ok deraadt@
|
|
the alias mapping when clearing it, since there is no guarantee the pool is
page aligned.
ok deraadt@
|
|
also do so in the kernel, which gains us an RO ssp cookie, which will prevent
spraying attacks.
The random layer was openbsd.randomdata-annotating its working entropy/chacha
buffers, which in turn required them to be RW. To make that work again,
we need to copy the RO seeds to RW working buffers, and later clear the
RO seed buffers using a temporary RW mapping.
help & ok kettenis, ok guenther
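Conceptually (editor's sketch: rng_seed, clear_via_rw_alias() and the exact sizes are placeholders; only the copy-then-clear shape is the point):

#include <sys/types.h>
#include <sys/systm.h>

#define KEYSZ	32	/* chacha key bytes */
#define IVSZ	8	/* chacha iv bytes */

/* RO seed placed in the .openbsd.randomdata section by the linker */
extern const u_char rng_seed[KEYSZ + IVSZ];

/* ordinary RW working copy used by the running rng */
static u_char rng_working[KEYSZ + IVSZ];

/* hypothetical MD helper: map the RO seed page RW, zero it, unmap */
void clear_via_rw_alias(const void *, size_t);

void
sketch_seed_init(void)
{
	memcpy(rng_working, rng_seed, sizeof(rng_working));
	clear_via_rw_alias(rng_seed, sizeof(rng_seed));
}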
|
|
"another leftover of the bean counter"
ok tedu@ deraadt@
|
|
ones are capable of giving valuable works-vs-does-not-work evidence.
ok tedu
|
|
we don't drop any events when the queue is full. They are instead mixed
into previous events.
The mixing function selected is addition instead of xor to reduce the
possibility that new values effectively erase existing ones.
Convert some types to u_int to ensure defined overflow.
ok deraadt djm
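A minimal sketch of the never-drop behaviour (sizes and names are illustrative; the u_int counters keep the overflow defined):

#include <sys/types.h>

#define QLEN	32			/* illustrative queue size */

static u_int	rnd_q[QLEN];
static u_int	q_prod, q_cons;		/* u_int: wrap-around is defined */

static void
sketch_add_event(u_int val)
{
	if (q_prod - q_cons < QLEN) {
		rnd_q[q_prod % QLEN] = val;	/* room: take a fresh slot */
		q_prod++;
	} else {
		/* full: never drop, add into the newest entry instead */
		rnd_q[(q_prod - 1) % QLEN] += val;
	}
}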
|
|
Pointed out by Martin Natano, slightly tweaked by me.
ok deraadt@
|
|
Diff from Martin Natano, thanks!
ok kettenis@, deraadt@
|
|
from Martin Natano (and also reported by Stefan Kempf)
|
|
specify a custom counter value when setting up the Chacha context.
ok reyk djm
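Sketch of the extended setup call (assuming the chacha_private.h interface where chacha_ivsetup() takes a third, possibly NULL, counter argument; everything else here is illustrative):

#include <sys/types.h>
#include "chacha_private.h"	/* the tree's embedded chacha implementation */

void
sketch_chacha_setup(chacha_ctx *ctx, const u_char key[32],
    const u_char iv[8], const u_char counter[8])
{
	chacha_keysetup(ctx, key, 256);
	/* a NULL counter keeps the old start-at-zero behaviour */
	chacha_ivsetup(ctx, iv, counter);
}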
|
|
|
|
ok djm@ miod@ deraadt@
|
|
have any direct symbols used. Tested for indirect use by compiling
amd64/i386/sparc64 kernels.
ok tedu@ deraadt@
|