the ioff argument to pool_init() is unused and has been for many
years, so this replaces it with an ipl argument. because the ipl
will be set at init time, we no longer need pool_setipl.
most of these changes have been done with coccinelle using the spatch
below. cocci sucks at formatting code though, so i fixed that by hand.
the manpage and subr_pool.c bits i did myself.
ok tedu@ jmatthew@
@ipl@
expression pp;
expression ipl;
expression s, a, o, f, m, p;
@@
-pool_init(pp, s, a, o, f, m, p);
-pool_setipl(pp, ipl);
+pool_init(pp, s, a, ipl, f, m, p);
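as a concrete example of what the conversion looks like at a call site
(the pool, its item type, and the ipl are invented for illustration):

	/* before: ipl set separately after init */
	pool_init(&ex_pool, sizeof(struct ex_thing), 0, 0, 0,
	    "expl", NULL);
	pool_setipl(&ex_pool, IPL_NET);

	/* after: the ipl goes where the unused ioff argument was */
	pool_init(&ex_pool, sizeof(struct ex_thing), 0, IPL_NET, 0,
	    "expl", NULL);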
|
|
pointed out by guenther@
|
|
ok jsg@ (who spotted the powerpc straggler too) millert@
|
|
tests out on powerpc and generates slightly better code
|
|
on a guess of how much memory a typical machine has. If the value is
too high, users may run out of kernel memory. Then we will have
to adjust this again.
OK claudio@ deraadt@
|
|
this is a step toward making ipls unconditional on pools.
ok deraadt@ kettenis@
|
|
inside the sigcontext. sigreturn(2) checks that the syscall entry was made
from the exact PC address in the (per-process ASLR) sigtramp, verifies the
cookie, and clears it to prevent sigcontext reuse.
not yet tested on landisk, sparc, *88k, socppc.
ok kettenis
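A conceptual sketch of the check (structure and field names here are
illustrative, not the exact kernel code):

	/* syscall entry must come from the per-process sigtramp PC */
	if (tf_pc != p->p_p->ps_sigcoderet)
		return (EPERM);
	/* the cookie stored at signal delivery must match ... */
	if (ksc.sc_cookie != expected_cookie)
		sigexit(p, SIGILL);
	/* ... and is cleared so the sigcontext cannot be replayed */
	ksc.sc_cookie = 0;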
|
|
artifacts seen in X on some G5 machines. Unfortunately not enough to fix
G4 machines. With help from Marcus Glocker.
ok mpi@
|
|
This stores errno, the cancelation flags, and related bits for each thread
and is allocated by ld.so or libc.a. This is an ABI break from 5.9-stable!
Make libpthread dlopen'able by moving the cancelation wrappers into libc
and doing locking and fork/errno handling via callbacks that libpthread
registers when it first initializes. 'errno' *must* be declared via
<errno.h> now!
Clean up libpthread's symbol exports like libc.
On powerpc, offset the TIB/TCB/TLS data from the register per the ELF spec.
Testing by various, particularly sthen@ and patrick@
ok kettenis@
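For example, code that previously declared errno by hand must now pick
up the declaration from the header:

	#include <errno.h>	/* required: errno lives in the TIB now */

	/* extern int errno;	   wrong: no longer matches the ABI */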
|
|
all the local ones to ``nticks''.
ok stefan@, deraadt@
|
|
This will allow us to use some of the DDB macros on trapframes that are
not DDB_REGS.
|
|
uncached. To be used in the drm code.
ok mpi@
|
|
comments
ok millert@
|
|
Same diff as guenther@ committed for alpha.
ok guenther@
|
|
pv lists with a mutex. This should make pmap_enter(9), pmap_remove(9) and
pmap_page_protect(9) safe to use without holding the kernel lock.
ok visa@, mpi@, deraadt@
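The shape of the change, roughly (the mutex and field names here are
illustrative):

	mtx_enter(&pg->mdpage.pv_mtx);
	LIST_FOREACH(pve, &pg->mdpage.pv_list, pv_list) {
		/* walk or modify pv entries without the kernel lock */
	}
	mtx_leave(&pg->mdpage.pv_mtx);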
|
|
* pool_allocator_single: single page allocator, always interrupt safe
* pool_allocator_multi: multi-page allocator, interrupt safe
* pool_allocator_multi_ni: multi-page allocator, not interrupt-safe
ok deraadt@, dlg@
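For instance, a hypothetical pool handing out multi-page items that is
never used from interrupt context would pick the last one (name and size
invented; shown with the current pool_init() signature):

	pool_init(&big_pool, 4 * PAGE_SIZE, 0, IPL_NONE, 0, "bigpl",
	    &pool_allocator_multi_ni);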
|
|
segment size constraint.
Make xhci(4) work on my G5.
ok dlg@
|
|
symbols in ASM
ok deraadt@ mpi@
|
|
ok deraadt@
|
|
support on powerpc as it uses a non-executable GOT and PLT.
"start slamming stuff in" deraadt@
|
|
there's no real functional advantage to this, except that it will
make it easier to add deadlock detection to the code.
this is modelled on the c mutex implementation that's on alpha,
mips64, and hppa.
ok mpi@ kettenis@
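a rough sketch of the idea (simplified, not the committed code):

	void
	mtx_enter(struct mutex *mtx)
	{
		/* raise to the mutex's ipl, then spin with an
		 * atomic compare-and-swap until we own it */
		int s = splraise(mtx->mtx_wantipl);

		while (atomic_cas_ptr(&mtx->mtx_owner, NULL,
		    curcpu()) != NULL)
			;	/* spin */
		mtx->mtx_oldipl = s;
	}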
|
|
faultbuf. But 1/ sr was only restored for machine check exceptions, and 2/ the
way it was saved was unsafe if interrupts were enabled, and could cause %r2
to be lost.
Discussing this with deraadt@ at the end of c2k15, this was probably needed
for the old VI boards which were the target of the original powerpc port and
came with a worse-than-Genesi openfirmware. Since then, machine check
exceptions have been unheard of; or, if they happen, they do not need the
status register to be restored.
ok mpi@ deraadt@
|
|
machines work again with the unlocked reaper.
ok mpi@, deraadt@
no objection from miod@
|
|
"shared reference pointers".
srp allows concurrent access to a data structure by multiple cpus
while avoiding interlocking cpu opcodes. it manages its own reference
counts and the garbage collection of those data structures to avoid
use-after-frees.
internally srp is a twisted version of hazard pointers, which are
a relative of RCU.
jmatthew wrote the bulk of a hazard pointer implementation and
changed bpf to use it to allow mpsafe access to bpfilters. however,
at s2k15 we were trying to apply it to other data structures but
the memory overhead of every hazard pointer would have blown out
significantly in several use cases. the bulk of our time at s2k15
was spent reworking hazard pointers into srp.
this diff adds the srp api and adds the necessary metadata to struct
cpu_info on our MP architectures. srp on uniprocessor platforms has
alternate code that is optimised because it knows there'll be no
concurrent access to data by multiple cpus.
srp is made available to the system via param.h, so it should be
available everywhere in the kernel.
the docs likely need improvement because i'm too close to the implementation.
ok mpi@
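a sketch of reader and writer usage with the api as first committed
(see srp_enter(9) for the authoritative interface, which may have
changed since):

	/* reader: take a reference, use the data, release */
	struct foo *f = srp_enter(&foo_srp);
	if (f != NULL)
		do_stuff(f);
	srp_leave(&foo_srp, f);

	/* writer: publish a new version under its lock; the old
	 * one is freed once the last reader leaves */
	srp_update_locked(&foo_gc, &foo_srp, newf);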
|
|
ok mpi@
|
|
ok mpi@
|
|
reused by a CPU while another CPU is manipulating it.
This race occurs because the virtual spill handlers are run without
taking the KERNEL_LOCK for obvious reasons. So use a per-pmap mutex
that CPUs must hold when modifying a pted in order to guarantee the
atomicity of operations *and* the coherence between the pmap VP tree and
what's in the HASH.
Thanks to dlg@ for assisting me debugging this. This change ends your
PowerPC pmap SMP show of the week. GENERIC.MP on macppc should now be
stable enough to build ports without corrupting its own memory.
ok kettenis@, deraadt@, dlg@
|
|
Needed for upcoming locking.
|
|
Since this lock is recursive we can now guarantee the atomicity of
pte_insert{32,64}() when a pted has to be removed first. This fixes
one of the races.
Using a __mp_lock here also allowed dlg@ to provide me useful traces
to fix the next race. Thanks for your help!
ok kettenis@, deraadt@, dlg@
|
|
it. This will reduce the number of places to audit for locking.
Note that for profiling purposes pte_spill_v() is now marked __noprof
since per-CPU profiling buffers are not guaranteed to be 1:1 mapped and
cannot be accessed from the real mode fault handler.
ok kettenis@, deraadt@, dlg@
|
|
Document every operation, make sure to call "sync" when appropriate so
that other CPUs see the bit changes and finally grab a lock where it was
missing to guarantee atomicity.
ok kettenis@, deraadt@, dlg@
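The "sync" in question is the PowerPC full memory barrier; a typical
inline wrapper (illustrative) looks like:

	static inline void
	ppc_sync(void)
	{
		__asm volatile ("sync" ::: "memory");
	}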
|
|
This should not introduce any behavior change but makes the code easier
to read and later easier to protect. This also brings this pmap closer
to what others do.
Thanks to kettenis@ for spotting a bad typo!
ok kettenis@, deraadt@, dlg@
|
|
If you wonder why pte_insert{32,64}() is not using pmap_hash_remove() if
it finds a conflicting PTE in the HASH, it's because in the current state
trying to grab the same lock a second time would lead to a deadlock.
This is much easier to reproduce on G5 (or G4 with BAT disabled).
ok kettenis@, deraadt@, dlg@
|
|
if a PTE is present in the HASH.
Note that atomicity is currently not guaranteed between this check and
the following operations.
ok kettenis@, deraadt@, dlg@
|
|
that does not call pmap_vp_lookup().
Careful readers will have noticed the removal of the masking of the bits
of the virtual address with a page mask; this change allowed me to find
the 13-year-old bug fixed in r1.145.
ok kettenis@, deraadt@, dlg@
|
|
This simplifies pmap_remove() & friends by re-using an already fetched PTE
descriptor.
There's currently a race on MP systems where one CPU can reuse a pted
while another one is still trying to insert it in the HASH. This commit
starts reducing the number of pmap_vp_lookup() calls to help fix this
race.
ok kettenis@, deraadt@, dlg@
|
|
Even if this change is not strictly needed because the memory will be
returned to the pool, it helped me track down the use-after-free.
|