Commit log for sys/arch/powerpc

2015-10-08  Mark Kettenis
Add a per-page flag to indicate that all mappings of that page should be uncached. To be used in the drm code. ok mpi@

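A minimal sketch of how such a flag might be honored at map time, assuming the names below (PG_PMAP_UC is an assumed flag name; PTE_I is the PowerPC WIMG cache-inhibit bit):

        /*
         * Hypothetical illustration: pmap_enter() consults a per-page
         * flag and forces the mapping cache-inhibited via the WIMG "I"
         * bit, so every mapping of the page is uncached.
         */
        static uint32_t
        pmap_pte_cachebits(struct vm_page *pg, uint32_t pte_lo)
        {
                if (pg != NULL && (pg->pg_flags & PG_PMAP_UC))
                        pte_lo |= PTE_I;        /* cache-inhibited mapping */
                return pte_lo;
        }
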
2015-09-26  Philip Guenther
lint is dead and C99 may be old enough to drive a car: delete LONGLONG comments. ok millert@

2015-09-21  Mark Kettenis
Fix membar positioning in mtx_enter_try() and (critically!) mtx_leave(). Same diff as guenther@ committed for alpha. ok guenther@

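For illustration, a minimal sketch of the release side, assuming an owner-pointer mutex (spl bookkeeping omitted; the mtx_owner field name is an assumption): the barrier must come before the store that publishes the lock as free.

        #include <sys/atomic.h>

        void
        mtx_leave(struct mutex *mtx)
        {
                /*
                 * membar_exit() before clearing the owner: all stores
                 * made in the critical section become visible before
                 * another CPU can observe the mutex as free and enter.
                 */
                membar_exit();
                mtx->mtx_owner = NULL;
        }
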
2015-09-13  Mark Kettenis
intr_barrier(9) for macppc and socppc.

2015-09-11  Mark Kettenis
Make the powerpc pmap (more) mpsafe by protecting both the pmap itself and the pv lists with a mutex. This should make pmap_enter(9), pmap_remove(9) and pmap_page_protect(9) safe to use without holding the kernel lock. ok visa@, mpi@, deraadt@

2015-09-08  Mark Kettenis
Give the pool page allocator backends more sensible names. We now have:
* pool_allocator_single: single page allocator, always interrupt safe
* pool_allocator_multi: multi-page allocator, interrupt safe
* pool_allocator_multi_ni: multi-page allocator, not interrupt-safe
ok deraadt@, dlg@

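A usage sketch under this era's pool_init(9) signature (the struct foo pool is hypothetical): a pool whose items may exceed a page and which must stay usable from interrupt context picks the interrupt-safe multi-page backend explicitly.

        struct pool foo_pool;   /* hypothetical pool of struct foo items */

        void
        foo_pool_init(void)
        {
                pool_init(&foo_pool, sizeof(struct foo), 0, 0, 0, "foopl",
                    &pool_allocator_multi);     /* interrupt-safe, multi-page */
        }
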
2015-09-06  Theo de Raadt
size for free()

2015-09-01  Martin Pieuchot
Sync bus_dmamap_load_raw(9) with amd64/i386 in order to respect the segment size constraint. Makes xhci(4) work on my G5. ok dlg@

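For context, a driver-side sketch (struct foo_softc and the single-segment sizing are hypothetical): the segment-size bound fixed at bus_dmamap_create() time is the constraint the powerpc bus_dmamap_load_raw() now honors.

        int
        foo_dma_map(struct foo_softc *sc, bus_dma_segment_t *segs, int nsegs,
            bus_size_t size, bus_dmamap_t *map)
        {
                int error;

                /* one segment of at most "size" bytes: the bound to honor */
                error = bus_dmamap_create(sc->sc_dmat, size, 1, size, 0,
                    BUS_DMA_NOWAIT, map);
                if (error)
                        return error;

                return bus_dmamap_load_raw(sc->sc_dmat, *map, segs, nsegs,
                    size, BUS_DMA_NOWAIT);
        }
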
2015-08-30  Philip Guenther
Provide remaining arch with END() macro for setting calculated size on symbols in ASM. ok deraadt@ mpi@

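The macro's shape, as it appears in the various <machine/asm.h> headers (a sketch from memory, not the verbatim commit): it emits a .size directive computed from the current location, so an assembly routine bracketed by ENTRY(fn)/END(fn) gets a correct ELF symbol size for tools like nm and gdb.

        /* set the ELF size of symbol "y" to the bytes assembled since its label */
        #define END(y)  .size y, . - y
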
2015-08-29  Philip Guenther
_NLIST_DO_ELF is no longer needed: it's the only option. ok deraadt@

2015-08-23  Mark Kettenis
Add support for the Secure-PLT ABI variant. This will give us better W^X support on powerpc as it uses a non-executable GOT and PLT. "start slamming stuff in" deraadt@

2015-08-14  David Gwynne
replace the asm mutexes with a c implementation. there's no real functional advantage to this, except that it will make it easier to add deadlock detection to the code. this is modelled on the c mutex implementation that's on alpha, mips64, and hppa. ok mpi@ kettenis@

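A condensed sketch of the acquire side of that C pattern (spl handling omitted; the mtx_owner field and the bare spin are assumptions about the shape, not the committed code):

        #include <sys/atomic.h>

        int
        mtx_enter_try(struct mutex *mtx)
        {
                struct cpu_info *ci = curcpu();

                /* become owner only if the mutex is currently free */
                if (atomic_cas_ptr(&mtx->mtx_owner, NULL, ci) == NULL) {
                        membar_enter(); /* acquire: fence before the CS */
                        return 1;
                }
                return 0;
        }

        void
        mtx_enter(struct mutex *mtx)
        {
                while (mtx_enter_try(mtx) == 0)
                        ;       /* spin until the current owner leaves */
        }
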
2015-07-29  Miod Vallat
17 years ago, setfault() was modified to save the status register in the faultbuf. But 1/ sr was only restored for machine check exceptions, and 2/ the way it was saved was unsafe if interrupts were enabled, and could cause %r2 to be lost.

Discussing this with deraadt@ at the end of c2k15, this was probably needed for the old VI boards, the target of the original powerpc port, which came with a worse-than-Genesi openfirmware. Since then, machine check exceptions have been unheard of; or, if they happen, they do not need the status register to be restored. ok mpi@ deraadt@

2015-07-20  Mark Kettenis
Make pmap_remove() grab the kernel lock. This is a big hammer but makes MP machines work again with the unlocked reaper. ok mpi@, deraadt@; no objection from miod@

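The "big hammer" is plain lock bracketing, roughly as in this sketch (pmap_remove_locked is a hypothetical name standing in for the existing body):

        void
        pmap_remove(struct pmap *pm, vaddr_t sva, vaddr_t eva)
        {
                KERNEL_LOCK();  /* serialize against the now-unlocked reaper */
                pmap_remove_locked(pm, sva, eva);
                KERNEL_UNLOCK();
        }
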
2015-07-17  Ted Unangst
remove obsolete INET kernel option

2015-07-02  David Gwynne
introduce srp, which according to the manpage i wrote is short for "shared reference pointers".

srp allows concurrent access to a data structure by multiple cpus while avoiding interlocking cpu opcodes. it manages its own reference counts and the garbage collection of those data structures to avoid use-after-frees.

internally srp is a twisted version of hazard pointers, which are a relative of RCU. jmatthew wrote the bulk of a hazard pointer implementation and changed bpf to use it to allow mpsafe access to bpfilters. however, at s2k15 we were trying to apply it to other data structures but the memory overhead of every hazard pointer would have blown out significantly in several use cases. a bulk of our time at s2k15 was spent reworking hazard pointers into srp.

this diff adds the srp api and the necessary metadata to struct cpuinfo on our MP architectures. srp on uniprocessor platforms has alternate code that is optimised because it knows there'll be no concurrent access to data by multiple cpus. srp is made available to the system via param.h, so it should be available everywhere in the kernel.

the docs likely need improvement cos i'm too close to the implementation. ok mpi@

2015-06-26  David Gwynne
remove __cpu_cas and use atomic_cas_ulong instead. ok mpi@

2015-06-26  David Gwynne
rename the guard #define from _MACHINE_MPLOCK_H_ to _POWERPC_MPLOCK_H_

2015-06-26  David Gwynne
move the ppc mplock implementation from macppc to powerpc. ok mpi@

2015-06-24  Martin Pieuchot
IPL_MPSAFE bits for macppc with openpic(4).

2015-06-05  Martin Pieuchot
Add bits missed in previous... I suck at cvs.

2015-06-05  Martin Pieuchot
Finally protect VP lookups to guarantee that a pted won't be freed or reused by a CPU while another CPU is manipulating it. This race occurs because the virtual spill handlers are run without taking the KERNEL_LOCK, for obvious reasons. So use a per-pmap mutex that CPUs must hold when modifying a pted in order to guarantee the atomicity of operations *and* the coherence between the pmap's VP tree and what's in the HASH.

Thanks to dlg@ for assisting me debugging this. This change ends your PowerPC pmap SMP show of the week. GENERIC.MP on macppc should now be stable enough to build ports without corrupting its own memory. ok kettenis@, deraadt@, dlg@

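A sketch of the pattern (pm_mtx follows the commit's description; PTED_VALID and pte_insert stand in for the real helpers): the spill handler and the regular pmap entry points all hold the per-pmap mutex around any pted access.

        int
        pmap_spill_sketch(struct pmap *pm, vaddr_t va)
        {
                struct pte_desc *pted;
                int inserted = 0;

                mtx_enter(&pm->pm_mtx);  /* no CPU can free/reuse pted below */
                pted = pmap_vp_lookup(pm, va);
                if (pted != NULL && PTED_VALID(pted)) {
                        pte_insert(pted); /* VP tree and HASH stay coherent */
                        inserted = 1;
                }
                mtx_leave(&pm->pm_mtx);

                return inserted;
        }
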
2015-06-05  Martin Pieuchot
Don't try to be clever when unrolling the loop in pmap_remove(). Needed for upcoming locking.

2015-06-05  Martin Pieuchot
Replace the per-entry locks by a global HASH lock. Since this lock is recursive we can now guarantee the atomicity of pte_insert{32,64}() when a pted has to be removed first. This fixes one of the races.

Using a __mp_lock here also allowed dlg@ to provide me useful traces to fix the next race. Thanks for your help! ok kettenis@, deraadt@, dlg@

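Why recursion matters, as a sketch (function names are stand-ins, and whether the removal path itself relocks is my assumption): the insert path may find a conflicting entry and call back into removal, which takes the same global lock again.

        struct __mp_lock pmap_hash_lock;        /* global and recursive */

        void
        pte_insert_sketch(struct pte_desc *pted)
        {
                __mp_lock(&pmap_hash_lock);
                if (pmap_ptedinhash(pted))
                        pmap_hash_remove(pted); /* may relock: recursion keeps this atomic */
                pte_insert(pted);               /* stand-in for pte_insert{32,64}() */
                __mp_unlock(&pmap_hash_lock);
        }
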
2015-06-05  Martin Pieuchot
Call pte_spill_v() from the real mode fault handler instead of rerolling it. This will reduce the number of places to audit for locking.

Note that for profiling purposes pte_spill_v() is now marked __noprof, since per-CPU profiling buffers are not guaranteed to be 1:1 mapped and cannot be accessed from the real mode fault handler. ok kettenis@, deraadt@, dlg@

2015-06-05  Martin Pieuchot
Rewrite PTE manipulation routines to better match the PEM. Document every operation, make sure to call "sync" when appropriate so that other CPUs see the bit changes, and finally grab a lock where it was missing to guarantee atomicity. ok kettenis@, deraadt@, dlg@

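A worked example of the discipline the PEM prescribes (field and constant names approximate the pmap's; the tlbie/eieio/tlbsync helpers are assumed inlines): invalidate, synchronize, modify, then republish, so no CPU ever observes a half-updated entry.

        static void
        pte_update_sketch(struct pte_32 *pte, vaddr_t va, uint32_t new_lo)
        {
                pte->pte_hi &= ~PTE_VALID;
                __asm volatile ("sync"); /* invalidation visible to all CPUs */
                tlbie(va);               /* flush the stale translation */
                eieio();
                tlbsync();

                pte->pte_lo = new_lo;
                __asm volatile ("sync"); /* new bits land before... */
                pte->pte_hi |= PTE_VALID; /* ...the entry goes valid again */
        }
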
2015-06-05  Martin Pieuchot
Split pteclrbits() into pmap_{test,clear}_attrs(). This should not introduce any behavior change but makes the code easier to read and later easier to protect. This also brings this pmap closer to what others do.

Thanks to kettenis@ for spotting a bad typo! ok kettenis@, deraadt@, dlg@

2015-06-05  Martin Pieuchot
More usages of pmap_ptedinhash(). If you wonder why pte_insert{32,64}() is not using pmap_hash_remove() if it finds a conflicting PTE in the HASH, it's because in the current state trying to grab the same lock a second time would lead to a deadlock. This is much easier to reproduce on G5 (or G4 with BAT disabled). ok kettenis@, deraadt@, dlg@

2015-06-05  Martin Pieuchot
Remove DEBUG stuff.

2015-06-05  Martin Pieuchot
Make use of ptesr() instead of rerolling it.

2015-06-05  Martin Pieuchot
Merge various copies of the same code into a new function to determine if a PTE is present in the HASH. Note that atomicity is currently not guaranteed between this check and the following operations. ok kettenis@, deraadt@, dlg@

2015-06-05  Martin Pieuchot
Introduce pmap_pted_ro(), a simple wrapper for the 32/64-bit versions that does not call pmap_vp_lookup(). Careful readers will have noticed the removal of the masking of the virtual address with a page mask; this change allowed me to find the 13-year-old bug fixed in r1.145. ok kettenis@, deraadt@, dlg@

2015-06-05  Martin Pieuchot
Do only one VP lookup when removing a page. This simplifies pmap_remove() & friends by re-using an already fetched PTE descriptor.

There's currently a race on MP systems where one CPU can reuse a pted while another one is still trying to insert it in the HASH. This commit starts reducing the number of pmap_vp_lookup() calls to help fix this race. ok kettenis@, deraadt@, dlg@

2015-06-05  Martin Pieuchot
Remove the MANAGED flag when removing a PV entry. Even if this change is not strictly needed, because the memory will be returned to the pool, it helped me track the use-after-free.

2015-06-05  Martin Pieuchot
Remove unneeded splvm() calls and the pool_setipl(9) hack of r1.140. By instrumenting spl(9) calls on MP machines I figured out that their high cost was hiding a race condition involving PTE reuse in our pmap.

Thanks to deraadt@ for finding a way to trigger such a panic by adding a couple of splvm(). This should make the races easier to trigger, but they will be addressed shortly. This commit starts your PowerPC pmap SMP show of the week. ok kettenis@, deraadt@, dlg@

2015-05-06  David Gwynne
put mpi's atomics back in, but with the return value of add (and therefore sub, inc, and dec) fixed.

the asm read the value from memory into a register, added to it, and then tried to write it back. after succeeding it doesn't have to add again before returning.

this splits sub, inc, and dec off from add. sub can use the subf opcode, and inc and dec can use the addic opcode. explicitly identify where the modified memory is so we can avoid using "memory" as a clobber. ok mpi@

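The shape of the fix, as a hedged sketch of the add operation (constraints written from memory of this style of code, not the committed diff): after a successful stwcx. the register already holds the updated value, so it is returned without a second add.

        static inline unsigned int
        atomic_add_int_nv_sketch(volatile unsigned int *p, unsigned int v)
        {
                unsigned int rv;

                __asm volatile (
                    "1: lwarx   %0, 0, %3   \n" /* load-reserve old value */
                    "   add     %0, %2, %0  \n" /* rv = old + v */
                    "   stwcx.  %0, 0, %3   \n" /* store-conditional */
                    "   bne-    1b          \n" /* lost reservation: retry */
                    : "=&r" (rv), "+m" (*p)     /* name the modified memory */
                    : "r" (v), "r" (p)
                    : "cc");

                return rv;      /* already the new value: no second add */
        }
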
2015-05-05  Philip Guenther
emul_native is only used for kernel threads which can't dump core, so delete coredump_trad(), uvm_coredump(), cpu_coredump(), struct md_coredump, and various #includes that are superfluous.

This leaves compat_linux processes without a coredump callback. If that ability is desired, someone should update it to use coredump_elf32() and verify the results... ok kettenis@

2015-04-30  Todd C. Miller
Remove SIZE_MAX from limits.h. It was added years ago before we had a proper stdint.h. No ports fallout. OK guenther@ miod@

2015-04-29  Jonathan Gray
Remove a check for NULL that would have come after a NULL dereference, were callers of save_vec() not already expected to pass a non-NULL pointer as an argument. ok kettenis@

2015-04-27  Martin Pieuchot
Correctly write the 64 bits of the HID 1, 4 and 5 registers. This makes the secondary cpu of my PowerMac as fast as the primary one, and divides the build time by 3 with a GENERIC.MP kernel on MP G5s. Found thanks to MP kernel profiling. ok dlg@, miod@

2015-04-24  Martin Pieuchot
Revert back to using GCC builtins. This code triggers an off-by-one in device_unref() as found by deraadt@.

2015-04-23  Martin Pieuchot
Fix a 13-year-old typo that was likely responsible for the unhappiness of UVM on PowerPC architectures, by breaking pmap_is_referenced() and friends. ok kettenis@

2015-04-22  Martin Pieuchot
Implement the MI atomic API for PowerPC to avoid using gcc builtins that include extra sync operations. ok kettenis@

2015-04-21  Philip Guenther
The ELF psABI for PPC specifies that the stack shall always be 16-byte aligned. ok mpi@ deraadt@

2015-03-31  Martin Pieuchot
Make it possible to disable the block address translation mechanism on processors that support it. Due to the way the trap code is patched, it is currently not possible to enable/disable BAT at runtime. ok miod@, kettenis@

2015-03-31  Martin Pieuchot
Merge two versions of ppc_check_procid(). ok miod@, kettenis@ as part of a larger diff

2015-02-15  Miod Vallat
Change pmap_remove_holes() to take a vmspace instead of a map as its argument. Use this on vax to correctly pick the end of the stack area now that the stackgap adjustment code will no longer guarantee it is a fixed location.

2015-02-11  David Gwynne
no md code wants lockmgr locks, so no md code needs to include sys/lock.h. with and ok miod@

2015-02-11  David Gwynne
make the rwlock implementation MI. each arch used to have to provide an rw_cas operation, but now we have the rwlock code build its own version. on smp machines it uses atomic_cas_ulong. on uniproc machines it avoids interlocked instructions by using straight loads and stores. this is safe because rwlocks are only used from process context and processes are currently not preemptible in our kernel. so alpha/ppc/etc might get a benefit. ok miod@ kettenis@ deraadt@

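A sketch of the idea (the exact helper name and return convention in kern_rwlock.c are assumptions): one compare-and-swap, selected at compile time, returning the old value so the caller can test for success.

        static inline unsigned long
        rw_cas_sketch(volatile unsigned long *p, unsigned long o,
            unsigned long n)
        {
        #ifdef MULTIPROCESSOR
                return atomic_cas_ulong(p, o, n);
        #else
                unsigned long ov = *p;

                /* plain load/store: process context only, no preemption */
                if (ov == o)
                        *p = n;
                return ov;
        #endif
        }
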
2015-02-09  Theo de Raadt
oops, accidental commit