Age | Commit message | Author |
|
|
and convert all gets() users.
ok deraadt@
|
|
a.out world.
ok deraadt@ kettenis@
|
|
as well. OK dlg@ mpi@
|
|
ok tedu@, deraadt@, miod@
|
|
LABELOFFSET and MAXPARTITIONS. Easier on the eye when scanning
through all these files. No functional change.
|
|
slightly different pattern. hppa/macppc compile and boot so
hppa64/aviion surely do too!
ok deraadt@
|
|
and boots, so the identical hppa64 should too!
ok deraadt@
|
|
disklabel processing. Especially when the 2nd one was not asking for a
disk sector's worth of buffer space.
ok kettenis@
|
|
comments
ok millert@
|
|
Discussed with guenther@, tested by landry@
|
|
PR_WAITOK flag to pmap_init and pass NULL as the pool allocator.
|
|
ok deraadt@
|
|
ok miod@
found by Maxime Villard / Brainy Code Scanner. thanks.
|
|
problem noted by landry@
ok dlg@
|
|
with a single KASSERT that checks whether the RB tree is empty. Seems uvm
was fixed some time ago and no longer leaves mappings behind.
ok deraadt@
|
|
"shared reference pointers".
srp allows concurrent access to a data structure by multiple cpus
while avoiding interlocking cpu opcodes. it manages its own reference
counts and the garbage collection of those data structures to avoid
use-after-frees.
internally srp is a twisted version of hazard pointers, which are
a relative of RCU.
jmatthew wrote the bulk of a hazard pointer implementation and
changed bpf to use it to allow mpsafe access to bpfilters. however,
at s2k15 we were trying to apply it to other data structures but
the memory overhead of every hazard pointer would have blown out
significantly in several use cases. the bulk of our time at s2k15
was spent reworking hazard pointers into srp.
this diff adds the srp api and adds the necessary metadata to struct
cpuinfo on our MP architectures. srp on uniprocessor platforms has
alternate code that is optimised because it knows there'll be no
concurrent access to data by multiple cpus.
srp is made available to the system via param.h, so it should be
available everywhere in the kernel.
the docs likely need improvement cos I'm too close to the implementation.
ok mpi@
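As an illustration of the access pattern srp provides, a minimal
hypothetical sketch in the style of srp(9); the "foo" names are invented
for the example and the exact signatures should be checked against the
manual rather than taken from here:

    /*
     * hypothetical srp example; signatures are assumptions based
     * on srp(9), not a quote of the committed code.
     */
    struct foo {
            int f_value;
    };

    struct srp foo_srp;
    struct srp_gc foo_gc;

    void
    foo_gc_free(void *null, void *v)
    {
            /* called once the last cpu reference is gone */
            free(v, M_DEVBUF, sizeof(struct foo));
    }

    void
    foo_setup(void)
    {
            srp_gc_init(&foo_gc, foo_gc_free, NULL);
            srp_init(&foo_srp);
    }

    /* reader: no interlocked opcodes on the hot path */
    int
    foo_read(void)
    {
            struct foo *f;
            int v = 0;

            f = srp_enter(&foo_srp);        /* take a cpu-local reference */
            if (f != NULL)
                    v = f->f_value;
            srp_leave(&foo_srp, f);         /* drop it; may run the gc */

            return (v);
    }

    /* writer: publish a new version; the old one is gc'd when idle */
    void
    foo_update(struct foo *nf)
    {
            srp_update_locked(&foo_gc, &foo_srp, nf);
    }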
|
|
from tilo stritzky
thanks to miod@ for help with the diff; he also noted that
leading whitespace gets stripped too.
|
|
interrupts at pckbc attach time, and get rid of the `intr_establish'
pckbc callback.
Tested on hppa (gsckbc) and sgi (pckbc@hpc); not tested on sparc64 (pckbc@ebus)
but this attachment was already behaving this way and its intr_establish
callback was an empty function.
|
|
the kernel_lock), as we already do better conversions in
user-mode. Yet, no need for every single driver to fiddle with the
conversion code, as conversions are done transparently by common MI code. With
help from armani and miod, support from mpi
ok armani@
|
|
ok miod@
|
|
delete coredump_trad(), uvm_coredump(), cpu_coredump(), struct md_coredump,
and various #includes that are superfluous.
This leaves compat_linux processes without a coredump callback. If that
ability is desired, someone should update it to use coredump_elf32() and
verify the results...
ok kettenis@
|
|
this is largely based on src/sys/arch/alpha/alpha/mutex.c r1.14 and
src/sys/arch/sgi/sgi/mutex.c r1.15
always and explicitly record which cpu owns the lock (or NULL if
no one owns it). improve the mutex diagnostics/asserts so they operate
on the mtx_owner field rather than mtx_lock. previously the asserts
would assume the local cpu owns the lock if any cpu owns the lock,
which blows up badly.
hppa hasn't got good atomic cpu opcodes, so this still relies on
ldcws to serialise access to the lock.
while I'm here I also shuffled the code. on MULTIPROCESSOR systems
instead of duplicating code between mtx_enter and mtx_enter_try,
mtx_enter simply loops on mtx_enter_try until it succeeds.
this also provides an alternative implementation of mutexes on
!MULTIPROCESSOR systems that avoids interlocking opcodes. mutexes
won't contend on UP boxes; they're basically wrappers around spls.
we can just do the splraise, stash the owner as a guard value for
DIAGNOSTIC and return. similarly, mtx_enter_try on UP will never
fail, so we can just call mtx_enter and return 1.
tested by and ok kettenis@ jsing@
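A minimal sketch of the scheme described above, assuming the usual
struct mutex fields (mtx_wantipl, mtx_oldipl, mtx_owner); illustrative
only, not the committed hppa code:

    #ifdef MULTIPROCESSOR
    void
    mtx_enter(struct mutex *mtx)
    {
            /* no duplicated code: just spin on the _try variant */
            while (mtx_enter_try(mtx) == 0)
                    ;
    }
    #else /* !MULTIPROCESSOR */
    void
    mtx_enter(struct mutex *mtx)
    {
            int s;

            s = splraise(mtx->mtx_wantipl); /* mask out would-be contenders */
            mtx->mtx_oldipl = s;
            mtx->mtx_owner = curcpu();      /* guard value for DIAGNOSTIC */
    }

    int
    mtx_enter_try(struct mutex *mtx)
    {
            /* a UP mutex can never be held by anyone else */
            mtx_enter(mtx);
            return (1);
    }

    void
    mtx_leave(struct mutex *mtx)
    {
            int s;

            s = mtx->mtx_oldipl;
            mtx->mtx_owner = NULL;
            splx(s);
    }
    #endif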
|
|
had a proper stdint.h. No ports fallout. OK guenther@ miod@
|
|
these days is incompatible with that practice and leads to deadlocks.
ok jsing@
|
|
unification...
|
|
Use this on vax to correctly pick the end of the stack area now that the
stackgap adjustment code will no longer guarantee it is a fixed location.
|
|
each arch used to have to provide an rw_cas operation, but now we
have the rwlock code build its own version. on smp machines it uses
atomic_cas_ulong. on uniproc machines it avoids interlocked
instructions by using straight loads and stores. this is safe because
rwlocks are only used from process context and processes are currently
not preemptible in our kernel. so alpha/ppc/etc might get a benefit.
ok miod@ kettenis@ deraadt@
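A sketch of the two rw_cas flavours this describes; hypothetical code,
not a quote of the committed kern_rwlock.c:

    #ifdef MULTIPROCESSOR
    static inline int
    rw_cas(volatile unsigned long *p, unsigned long o, unsigned long n)
    {
            /* interlocked compare-and-swap; returns 0 on success */
            return (atomic_cas_ulong(p, o, n) != o);
    }
    #else
    static inline int
    rw_cas(volatile unsigned long *p, unsigned long o, unsigned long n)
    {
            /*
             * plain load/store is enough: rwlocks are only taken from
             * process context and the kernel is not preemptible, so
             * nothing can interleave with this sequence on UP.
             */
            if (*p != o)
                    return (1);
            *p = n;
            return (0);
    }
    #endif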
|
|
ok guenther@
|
|
- rename uiomove() to uiomovei() and update all its users.
- introduce uiomove(), which is similar to uiomovei() but with a size_t.
- rewrite uiomovei() as a uiomove() wrapper.
ok kettenis@
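The shape of the wrapper, as a sketch (the real prototypes live in the
tree, so treat this as illustrative):

    /* legacy int interface forwards to the size_t version */
    int
    uiomovei(void *buf, int n, struct uio *uio)
    {
            return (uiomove(buf, (size_t)n, uio));
    }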
|
|
not necessary, but consistent with other platforms. ok deraadt
|
|
per-process value, and therefore turns the VM_PSSTRINGS sysctl into a
per-process one as well. This gets rid of a pointer to the bottom of the
stack at a fixed location. Also clears the way for unmapping the stackgap.
ok deraadt@
|
|