path: root/sys/arch/m88k
2007-11-14  Cache curcpu() value into a local variable when it is used more than once in a function, so that it does not get reloaded from cr17 every time. (Miod Vallat)
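A minimal sketch of the pattern (not the committed diff; the ci_intrdepth member is used purely for illustration):

    /*
     * Hedged illustration: read curcpu() once into a local so the cpu_info
     * pointer is loaded from cr17 a single time instead of at every use.
     */
    void
    example_intr_prologue(void)
    {
            struct cpu_info *ci = curcpu();     /* one ldcr from cr17 */

            ci->ci_intrdepth++;
            /* ... interrupt handling work ... */
            ci->ci_intrdepth--;
    }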
2007-11-14  Merge the ci_alive and ci_primary boolean values of struct cpu_info into a single ci_flags bitfield. Also, set_cpu_number() will no longer set CIF_PRIMARY on the primary processor, it's up to the initialization code to do this. (Miod Vallat)
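A hedged sketch of the resulting layout; CIF_PRIMARY comes from the commit message above, while CIF_ALIVE and the numeric values are assumptions made for illustration:

    struct cpu_info {
            /* ... other members ... */
            u_int   ci_flags;               /* replaces ci_alive/ci_primary */
    #define CIF_ALIVE       0x01            /* cpu is up and accepting IPIs */
    #define CIF_PRIMARY     0x02            /* cpu is the boot processor */
            /* ... */
    };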
2007-11-14  When processing a data access fault, keep the kernel lock while invoking data_access_emulation() to complete the interrupted pipeline operations, as data_access_emulation() can fault in turn. (Miod Vallat)
2007-11-14  When servicing an exception, do not enable interrupts if they were not enabled when the exception occurred. This should not happen in practice, but better safe than sorry. (Miod Vallat)
2007-11-14  Let ``machine cpu #'' hop to the given cpu. (Miod Vallat)
2007-11-11  Give more information in ``machine cpu'' under ddb. (Miod Vallat)
2007-11-11  In dma_cachectl(), flush unconditionally on all processors, regardless of the cpu bitmask of the pmap. (Miod Vallat)
2007-11-09  In dma_cachectl*(), try and perform fewer remote processor operations whenever possible. (Miod Vallat)
2007-11-09  Do not bother checking for curproc != NULL if we know a trap comes from usermode, since curproc can not be NULL outside the kernel. (Miod Vallat)
2007-11-09  On MULTIPROCESSOR kernels, don't forget to grab the kernel lock when processing soft interrupts; and there was much rejoicing. (Miod Vallat)
2007-11-06  Keep a pending software interrupts mask per processor, instead of having it global; and only schedule software interrupts on the currently running cpu. (Miod Vallat)
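A hedged sketch of the idea; ci_softintr and SIR_NET are illustrative names, and atomic_setbits_int() is the usual OpenBSD primitive for this kind of update:

    #define SIR_NET         0x01            /* example soft interrupt bit */

    /*
     * Mark a soft network interrupt as pending on the cpu that is currently
     * running, instead of setting a bit in a global mask shared by all
     * processors.
     */
    static __inline void
    setsoftnet(void)
    {
            struct cpu_info *ci = curcpu();

            atomic_setbits_int(&ci->ci_softintr, SIR_NET);
    }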
2007-11-06  Comment out the pmap fine grained locking stuff, it is not necessary for now because of the global lock. It will get enabled again when locking work progresses. (Miod Vallat)
2007-11-06  Be sure to pmap_deactivate() a process during context switches, so that the cpu which runs it is accounted correctly in MP kernels. (Miod Vallat)
2007-11-06  Remove the now unused idle_u, and call the secondary processors' startup stack a startup stack. (Miod Vallat)
2007-10-29  Make sure the dma_cachectl*() functions actually do their work on all affected processors if option MULTIPROCESSOR. It's amazing bsd.mp could boot multiuser without this. (Miod Vallat)
2007-10-29  When a secondary cpu gets its interrupt pin stuck, be sure to savectx and put the process it was running back on the run queue (unless this was the idle proc). (Miod Vallat)
2007-10-28  Do not flag a processor as ``alive'' until it really is ready to accept IPIs. (Miod Vallat)
2007-10-28  When handling a userland data fault occurring in kernel mode, take the kernel lock with KERNEL_LOCK, not KERNEL_PROC_LOCK. This lets bsd.mp run multiuser on a single-processor board. (Miod Vallat)
2007-10-28  Disable interrupts around changing curproc and curpcb so these always match. (Miod Vallat)
2007-10-27  Use the same assembly constraints for all inline assembler xmem constructs. (Miod Vallat)
2007-10-27  In __cpu_simple_lock() and __cpu_simple_lock_try(), use a local u_int instead of a local __cpu_simple_lock_t (which is volatile), so that the compiler can optimize it to a register, instead of using a memory location (and doing stores into it when __cpu_simple_lock() is spinning). This makes the MP code a bit smaller and a bit faster. (Miod Vallat)
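A hedged sketch of the technique, not the verbatim kernel code (constants shown with illustrative values): the value being spun on is a plain u_int local, so the compiler may keep it in a register, while the m88k xmem instruction performs the atomic exchange with the lock word:

    typedef volatile unsigned int __cpu_simple_lock_t;
    #define __SIMPLELOCK_UNLOCKED   0
    #define __SIMPLELOCK_LOCKED     1

    static __inline void
    __cpu_simple_lock(__cpu_simple_lock_t *l)
    {
            u_int old;              /* plain local, not a volatile lock type */

            do {
                    old = __SIMPLELOCK_LOCKED;
                    __asm__ volatile ("xmem %0, %2, %%r0"
                        : "+r" (old), "+m" (*l) : "r" (l));
            } while (old != __SIMPLELOCK_UNLOCKED);
    }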
2007-10-27  No need for an explicit pipeline synchronization in invalidate_pte(), the xmem instruction does it for us. (Miod Vallat)
2007-10-27  Be more strict when disassembling {f,}{st,x}cr and [bt]cnd instructions, and display incorrect opcode encodings as invalid opcodes. (Miod Vallat)
2007-10-24  Rely on 16 byte pcb alignment, and use double loads and stores during context switches. Should have been committed ages ago (when pcb alignment was fixed). No functional change. (Miod Vallat)
2007-10-24  Remove sir_lock, superseded by the atomic bit operations. (Miod Vallat)
2007-10-24  Turn curcpu() into an inline function instead of a macro relying on a GCC extension. (Miod Vallat)
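A hedged sketch of what such an inline looks like on m88k, where the cpu_info pointer is kept in control register cr17 (the exact asm spelling in the tree may differ):

    static __inline struct cpu_info *
    curcpu(void)
    {
            struct cpu_info *ci;

            /* read the per-cpu cpu_info pointer held in cr17 */
            __asm__ volatile ("ldcr %0, %%cr17" : "=r" (ci));
            return (ci);
    }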
2007-10-16  Do not expose the end of the proc_trampoline bowels to C code anymore, and get rid of the ``switchframe'' struct definition. As a bonus, this makes cpu_fork() simpler and unwastes 8 bytes of u area. (Miod Vallat)
2007-10-16  Fix the mtx_wantipl != IPL_NONE comparison in the ``have to spin'' MULTIPROCESSOR case in mtx_enter. (Miod Vallat)
2007-10-13  It is no longer necessary to fiddle with spl in cpu_idle_{enter,leave} now that proc_trampoline has been fixed. (Miod Vallat)
2007-10-13  Be sure to spl0() in proc_trampoline, so that kernel threads start at IPL_NONE. (Miod Vallat)
2007-10-13  Do not splhigh() before invoking sched_exit(), sched_exit() will do it better. (Miod Vallat)
2007-10-10  Make context switching much more MI: (Artur Grabowski)
  - Move the functionality of choosing a process from cpu_switch into a much simpler function: cpu_switchto. Instead of having the locore code walk the run queues, let the MI code choose the process we want to run and only implement the context switching itself in MD code.
  - Let MD context switching run without worrying about spls or locks.
  - Instead of having the idle loop implemented with special contexts in MD code, implement one idle proc for each cpu. Make the idle loop MI with MD hooks.
  - Change the proc lists from the old style vax queues to TAILQs.
  - Change the sleep queue from vax queues to TAILQs. This makes wakeup() go from O(n^2) to O(n).
  There will be some MD fallout, but it will be fixed shortly. There are also a few cleanups to be done after this. deraadt@, kettenis@ ok
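A hedged sketch of the resulting split, with illustrative names on the MI side; only cpu_switchto() is the MD entry point named above:

    /*
     * MI scheduler: choose the next process here and hand the pair to the
     * MD layer, which now only saves and restores register state.
     */
    void
    mi_switch_sketch(struct proc *cur)
    {
            struct proc *next;

            next = sched_chooseproc();       /* illustrative MI chooser */
            if (next != cur)
                    cpu_switchto(cur, next); /* MD context switch only */
    }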
2007-09-10  Introduce an MD pmap hook, pmap_remove_holes(), which is supposed to mark the holes an MMU may have in a given vm_map. This will be automagically invoked for newly created vmspaces. On platforms with MMU holes (e.g. sun4, sun4c and vax), this prevents mmap(2) hints that would end up in the hole from being accepted as valid, which would cause unexpected signals when the process tries to access the hole (since the pmap cannot fill the hole anyway). Unfortunately, the logic mmap() uses to pick a valid address for anonymous mappings needs work, as it will only try to find an address higher than the hint, which causes all mmap() calls with a hint in the hole to fail on vax. This will be improved later. (Miod Vallat)
2007-06-20  In vunmapbuf(), explicitly remove mappings before invoking uvm_km_free(). Even if the latter would end up removing the mappings by itself, it would do so using pmap_remove() because phys_map is not intrsafe; but some platforms use pmap_kenter_pa() in vmapbuf(). By removing the mappings ourselves, we can ensure the remove function used matches the enter function which has been used. Discussed and theoretical ok art@ (Miod Vallat)
2007-05-29  Use atomic operations to operate on netisr, instead of clearing it at splhigh. This changes nothing on legacy architectures, but is a bit faster (and simpler) on the interesting ones. (Miod Vallat)
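A generic, hedged illustration of the approach using plain C11 atomics rather than the kernel's own primitives: the pending bits are swapped out in one atomic step instead of being read and then cleared at splhigh:

    #include <stdatomic.h>

    static _Atomic unsigned int netisr_pending;

    void
    dosoftnet(void)
    {
            unsigned int n;

            /* grab-and-clear atomically; no need to raise the spl */
            n = atomic_exchange(&netisr_pending, 0);

            while (n != 0) {
                    int bit = __builtin_ctz(n);

                    n &= n - 1;     /* clear the lowest set bit */
                    /* dispatch the protocol handler registered for `bit' */
                    (void)bit;
            }
    }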
2007-05-28  Move the MSIZE, MCLSHIFT, MCLBYTES and the MCLOFSET mbuf constants from MD param.h to MI param.h. Besides being the same on every arch, things will most probably break if any arch has different values than the others. The NMBCLUSTERS constant needs to be MD though; ok miod@,krw@,claudio@ (Thordur I. Bjornsson)
2007-05-27  pagemove() is no longer used. (Miod Vallat)
2007-05-20  Since we no longer use 3 bits but the whole 7 to get the processor revision number, we should test for 10, not 2, as the revision for which the xxx.usr errata applies; also, going through the errata, revision 2/10 (1010x) _is_ affected. (Miod Vallat)
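A hedged sketch of the check being described; it assumes the caller already extracted the full 7-bit revision field, and the exact comparison in the tree may differ:

    /*
     * With all 7 revision bits considered (instead of only 3 of them), the
     * mask revision affected by the xxx.usr errata reads as 10, not 2.
     */
    int
    xxx_usr_errata_applies(u_int vers)
    {
            return (vers == 10);
    }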
2007-05-19  Send an IPI in signotify() if the process runs on a different processor, similar to the fix which went into i386 and amd64 a few weeks ago. (Miod Vallat)
2007-05-19  Force other processors to spin when one is in ddb. (Miod Vallat)
2007-05-19  Simpler asm constraints for simplelock operations. (Miod Vallat)
2007-05-18  Move proc_do_uret() around so that it can fall through no_ast instead of jumping to it. No functional change. (Miod Vallat)
2007-05-18  In spl0(), really process soft interrupts at IPL_SOFT instead of whatever level we were at. (Miod Vallat)
2007-05-18  Revert previous revision, and do it again correctly. (Miod Vallat)
2007-05-16  splassert_ctl defaults to 1 now, so don't wrap the checks for splassert_ctl > 0 in __predict_false(). ok deraadt@ (Thordur I. Bjornsson)
2007-05-16  The world of __HAVEs and __HAVE_NOTs is reducing. All architectures have cpu_info now, so kill the option. eyeballed by jsg@ and grange@ (Artur Grabowski)
2007-05-15  Remove the MI implementation of mutexes and remove the __HAVE_MUTEX option. Every architecture implements mutexes now. (Artur Grabowski)
2007-05-14  Work in progress IPI mechanism, currently only implemented on MVME188, to send clock ticks to secondary processors. (Miod Vallat)
2007-05-14  Oops, correctly handle spl-less mutexes. (Miod Vallat)
2007-05-12  Change the 88100 interrupt handlers to process DAEs with interrupts enabled, as done for DAEs not occurring during interrupts. Remove the check for unprocessed DAE on return from trap() in eh_common.S, since this can't happen. As a result, the return-from-trap code becomes identical on 88100 and 88110 systems. (Miod Vallat)