|
the sense that it guarantees that the specified CPU went through the
scheduler. This also guarantees that interrupt handlers running on that CPU
will have finished when sched_barrier() returns.
ok miod@, guenther@
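A minimal sketch of the kind of use this enables; the softc, the detach
routine and the "stop" step are invented for illustration and are not
part of this commit:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/sched.h>

    struct foo_softc {
        int     sc_running;     /* hypothetical per-device state */
    };

    void
    foo_detach(struct foo_softc *sc, struct cpu_info *ci)
    {
        sc->sc_running = 0;     /* stop queueing new work that touches sc */

        /*
         * sched_barrier() returns once ci has been through the
         * scheduler, so any interrupt handler that was running on
         * ci (and may still have been looking at sc) has finished.
         */
        sched_barrier(ci);

        free(sc, M_DEVBUF, sizeof(*sc));
    }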
|
|
over the tree.
much encouragement from l2k15
|
|
found by deraadt@
|
|
sleep_setup/sleep_finish.
|
|
refcnt(9) can be used.
|
|
it's basically atomic inc/dec, but it includes magical sleep code
in refcnt_finalise that is better written once than many times.
refcnt_finalise sleeps until all references are released and does
so with sleep_setup and sleep_finish, which is fairly subtle.
putting this in now so we can get on with work in the stack, a
proper discussion about visibility and how available intrinsics
should be in the kernel can happen after next week.
with help from guenther@
ok guenther@ deraadt@ mpi@
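a rough sketch of the intended pattern; struct foo and its helpers are
made up for illustration, and the finalize spelling follows refcnt(9):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/refcnt.h>

    struct foo {
        struct refcnt   f_refs;         /* hypothetical refcounted object */
    };

    struct foo *
    foo_create(void)
    {
        struct foo *f;

        f = malloc(sizeof(*f), M_DEVBUF, M_WAITOK | M_ZERO);
        refcnt_init(&f->f_refs);        /* starts with one reference */
        return (f);
    }

    void
    foo_take(struct foo *f)
    {
        refcnt_take(&f->f_refs);
    }

    void
    foo_rele(struct foo *f)
    {
        refcnt_rele_wake(&f->f_refs);   /* wakes the finalizer if last */
    }

    void
    foo_destroy(struct foo *f)
    {
        /* drop the initial ref and sleep until all others are gone */
        refcnt_finalize(&f->f_refs, "foofin");
        free(f, M_DEVBUF, sizeof(*f));
    }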
|
|
logic a bit so that an invalid primary header/partition entries
table does not cause readgptlabel() to exit before the secondary
header is tried.
|
|
layer because the strings select the right options. Mechanical
conversion.
ok guenther
|
|
get_fstype() to gpt_get_fstype() as it moves.
|
|
of repeated letoh32() and letoh64() in readgptlabel() to make code
more readable.
|
|
to use.
|
|
tracking our discovery of the first OpenBSD partition (ourpart) and
just use the variable holding the offset of the first OpenBSD
partition (gptpartoff).
Move initialization of gptpartoff and gptpartend closer to their
use and set them when the first OpenBSD partition is found, thus
eliminating a later 'if' statement.
|
|
|
|
|
|
dropped message error log.
OK benno@
|
|
for readgptlabel() to re-check that the label's d_secpercyl and
d_secsize are not 0.
|
|
with MBR EFI SYSTEM partitions.
|
|
multi page backend allocator implementation no longer needs to grab the
kernel lock.
ok mlarkin@, dlg@
|
|
function value the variable was being set to.
|
|
ok deraadt@
|
|
ok deraadt@ miod@
|
|
found, as is done in MBR processing.
|
|
without proper device trees.
Be sure to build and install config(8) and rerun it before attempting to build
a kernel.
ok kettenis@ deraadt@ jasper@ visa@
|
|
|
|
accidentally capture disks ...
Eliminate kernel option GPT and associated #ifdef GPT/#endif. Let
everybody get on the GPT bandwagon and we'll see what wheels fly
off.
Requested by & ok deraadt@
|
|
Call it if and only if there is an MBR on sector 0 that contains 1
and only 1 partition; that partition is an EFI partition; and it
covers the entire disk or as much of the disk as can be covered in
an MBR partition.
Be paranoid about restoring any possible tweaks to the label being
built if readgptlabel() fails, and in that case return to the
readdoslabel() code.
ok deraadt@
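For illustration, the shape of that test; this is a sketch of the
conditions described above, not the readdoslabel() code from this
commit, and the function name is invented:

    #include <sys/param.h>
    #include <sys/disklabel.h>
    #include <sys/endian.h>
    #include <sys/errno.h>

    int
    check_protective_mbr(struct dos_partition *dp, u_int64_t dsize)
    {
        u_int32_t psize;
        int i, inuse = 0, efi = 0;

        for (i = 0; i < NDOSPART; i++) {
            if (dp[i].dp_typ == DOSPTYP_UNUSED)
                continue;
            inuse++;
            if (dp[i].dp_typ != DOSPTYP_EFI)
                continue;
            psize = letoh32(dp[i].dp_size);
            /* whole disk, or as much as a 32-bit MBR entry can hold */
            if (psize == dsize - 1 || psize == 0xffffffff)
                efi++;
        }

        return ((inuse == 1 && efi == 1) ? 0 : EINVAL);
    }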
|
|
found. Keep going until we spoof 8 or run out of partitions needing
spoofing.
|
|
It has already been initialized in the MD readdisklabel() routines
when they call initdisklabel().
ok deraadt@
|
|
|
|
which results in tame() code placements being much more recognizable.
tame() can be moved to unistd.h and does not need cpp symbols to turn the
bits on and off. The resulting API is a bit unexpected, but simplifies the
mapping to enabling bits in the kernel substantially.
vague ok's from various including guenther doug semarie
|
|
this allows us to build lists of things that can be followed by
multiple cpus.
ok mpi@ claudio@
|
|
* pool_allocator_single: single page allocator, always interrupt safe
* pool_allocator_multi: multi-page allocator, interrupt safe
* pool_allocator_multi_ni: multi-page allocator, not interrupt-safe
ok deraadt@, dlg@
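For illustration, roughly how one of these is selected; the pool and its
size are hypothetical, and the pool_init() prototype is shown as in the
current pool_init(9), which is not identical to the one at the time of
this commit:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pool.h>

    struct pool bigpool;    /* hypothetical pool of multi-page items */

    void
    bigpool_init(void)
    {
        /*
         * Ask for the interrupt-safe multi-page backend explicitly;
         * passing NULL instead lets pool_init() pick a default.
         */
        pool_init(&bigpool, 4 * PAGE_SIZE, 0, IPL_NONE, 0, "bigpool",
            &pool_allocator_multi);
    }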
|
|
isn't specified) the default backend allocator implementation no longer
needs to grab the kernel lock.
ok visa@, guenther@
|
|
and doing VOP_WRITE() from inside tsleep/msleep makes the locking too
complicated, making it harder to move forward on MP changes.
ok deraadt@ kettenis@
|
|
in the (default) single page pool backend allocator. This means it is now
safe to call pool_get(9) and pool_put(9) for "small" items while holding
a mutex without holding the kernel lock as well, as these functions will
no longer acquire the kernel lock under any circumstances. For "large" items
(where large is larger than 1/8th of a page) this still isn't safe though.
ok dlg@
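For illustration, the kind of caller this change is aimed at; the pool,
mutex and item are hypothetical:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mutex.h>
    #include <sys/pool.h>

    struct pool itempool;   /* items smaller than PAGE_SIZE / 8 */
    struct mutex itemmtx = MUTEX_INITIALIZER(IPL_NET);

    void *
    item_get(void)
    {
        void *it;

        /* only a mutex is held here, no KERNEL_LOCK() */
        mtx_enter(&itemmtx);
        it = pool_get(&itempool, PR_NOWAIT);
        mtx_leave(&itemmtx);

        return (it);
    }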
|
|
length of the key as argument.
This way every consumer of the radix tree has a chance to explicitly
initialize the shared data structures and no longer rely on another
subsystem to do the initialization.
As a bonus ``dom_maxrtkey'' is no longer used and can die.
ART kernels should now be fully usable because pf(4) and IPsec properly
initialize the radix tree.
ok chris@, reyk@
|
|
|
|
log attempts. sendsyslog(2) is a good place to detect and report
the problem.
OK deraadt@
|
|
to return failure. open() of these paths should succeed to satisfy
strerror() and friends.
ok semarie
|
|
ok doug@
|
|
functions. Note that these calls are deliberately not added to the
special-purpose back-end allocators in the various pmaps. Those allocators
either don't need to grab the kernel lock, are always called with the kernel
lock already held, or are only used on non-MULTIPROCESSOR platforms.
ok tedu@, deraadt@, dlg@
|
|
hazard pointers were becoming corrupt and therefore causing panics.
the problem turned out to be that bridge_input calls if_input on
behalf of a hardware interface which then calls bpf_mtap at splsoftnet,
while the actual hardware nic calls if_input and bpf_mtap at splnet.
the hardware interrupts ran in the middle of the bpf calls that bridge
runs at softnet. this means the same srps are being entered and
left on the same cpu at different ipls, which led to races because
of the order of operations on the per cpu hazard pointers.
after a lot of experimentation, jmatthew@ figured out how to deal
with this problem without introducing per cpu critical sections
(ie, splhigh) calls in srp_enter and srp_leave, and without introducing
atomic operations.
the solution is to iterate forward through the array of hazard
pointers in srp_enter, and backward in srp_leave to clear. if you
guarantee that you leave srps in the reverse order to entering them,
then you can use the same set of SRPs at different IPLs on the same
CPU.
the ordering requirement is a problem if we want to build linked
data structures out of srps because you need to hold a ref to the
current element containing the next srp to use it, before giving
up the current ref. we're adding srp_follow() to support taking the
next ref and giving up the current one while preserving the structure
of the hazard pointer list. srp_follow() does this by reusing the
hazard pointer for the current reference for the next ref.
both mattieu baptiste and jmatthew@ have been hitting this pretty
hard with a tweaked version of srp+bpf that uses srp_follow instead
of interleaved srp_enter/srp_leave sequences. neither can reproduce
the panics anymore.
thanks to mattieu for the report and tests
ok jmatthew@
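a sketch of the list walking pattern srp_follow() is meant for; the
element type is invented and the prototypes follow srp_enter(9) as it
is documented, which may not match the interface exactly as added here:

    #include <sys/param.h>
    #include <sys/srp.h>

    struct elm {
        struct srp      e_next;         /* references the next struct elm */
        int             e_key;
    };

    struct srp elm_head;                /* references the first struct elm */

    int
    elm_exists(int key)
    {
        struct srp_ref sr;
        struct elm *e;
        int found = 0;

        /*
         * srp_follow() takes the ref on the next element and gives up
         * the ref on the current one by reusing the same hazard
         * pointer, so refs are always released in the reverse order
         * they were taken, even while walking a chain.
         */
        for (e = srp_enter(&sr, &elm_head); e != NULL;
            e = srp_follow(&sr, &e->e_next)) {
            if (e->e_key == key) {
                found = 1;
                break;
            }
        }
        srp_leave(&sr);

        return (found);
    }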
|
|
|
|
|
|
|
|
want to do the same for fstatfs(), after we handle statfs(). These system
calls leak path information; however, I am reluctant to add a separate
category.
|
|
cr_uid/cr_gid (effective ids). Thus, chown(, -1,-1) should work OK, so
should chown(, me, -1), etc. With this committed, more people can test.
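For illustration, a userland caller relying on the -1 convention; the
file name is hypothetical:

    #include <sys/types.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* -1 means "leave this id unchanged", so only the group changes */
        if (chown("example.file", (uid_t)-1, getgid()) == -1)
            err(1, "chown");
        return (0);
    }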
|
|
those bits in the request and continue. This is a better posix-subset
to give to programs.
|
|
Cleaner, clearer and less error prone.
Tested by bmercer@ as part of a larger diff, of which this is the
last part.
reads ok to jsing@ kettenis@. ok deraadt@.
|
|
sector containing the disklabel, eliminating an unnecessary " *
DL_BLKSPERSEC()".
Tested by bmercer@ as part of larger diff.
Idea from & reads ok to jsing@. ok kettenis@.
|