a function, so that it does not get reloaded from cr17 every time.

a single ci_flags bitfield.
Also, set_cpu_number() will no longer set CIF_PRIMARY on the primary processor;
it is up to the initialization code to do this.

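To make the consolidation concrete, here is a minimal sketch of a single per-cpu flag word whose primary bit is set by the initialization code rather than by set_cpu_number(); the CIF_* values and the struct layout are illustrative stand-ins, not the actual m88k cpu_info definitions.

    #define CIF_ALIVE    0x01    /* processor is running */
    #define CIF_PRIMARY  0x02    /* processor is the bootstrap cpu */

    struct cpu_info_sketch {
        unsigned int ci_flags;   /* single bitfield instead of separate booleans */
    };

    /* Called from the initialization code, on the boot processor only. */
    static void
    cpu_mark_primary(struct cpu_info_sketch *ci)
    {
        ci->ci_flags |= CIF_PRIMARY;
    }
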
data_access_emulation() to complete the interrupted pipeline operations,
as data_access_emulation() can fault in turn.

enabled when the exception occurred. This should not happen in practice,
but better be safe than sorry.

cpu bitmask of the pmap.

possible.

usermode, since curproc cannot be NULL outside the kernel.

processing soft interrupts; and there was much rejoicing.

global; and only schedule software interrupts on the currently running cpu.

because of the global lock. It will get enabled again when locking work
progresses.

the cpu which runs it is accounted correctly in MP kernels.

stack a startup stack.

affected processors if option MULTIPROCESSOR. It's amazing bsd.mp could
boot multiuser without this.

and put the process it was running back on the run queue (unless this was
the idle proc).

lock with KERNEL_LOCK, not KERNEL_PROC_LOCK. This lets bsd.mp run multiuser
on a single-processor board.

of a local __cpu_simple_lock_t (which is volatile), so that the compiler can
optimize it to a register, instead of using a memory location (and doing
stores into it when __cpu_simple_lock() is spinning).
This makes the MP code a bit smaller and a bit faster.

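As a rough illustration of the point (this is not the actual m88k xmem-based lock code, and the use of C11 atomics here is an assumption made only for the sketch), compare a spin loop whose scratch variable has a volatile-qualified type with one whose scratch variable is a plain integer:

    #include <stdatomic.h>

    /*
     * Volatile-typed local, mirroring __cpu_simple_lock_t: `old' must live
     * in memory, so every spin iteration stores into its stack slot.
     */
    static void
    spin_acquire_volatile_local(_Atomic unsigned int *lockp)
    {
        volatile unsigned int old;

        do {
            old = atomic_exchange(lockp, 1);
        } while (old != 0);
    }

    /*
     * Plain local: `old' can be kept in a register for the whole loop,
     * which is what makes the MP code a bit smaller and faster.
     */
    static void
    spin_acquire_plain_local(_Atomic unsigned int *lockp)
    {
        unsigned int old;

        do {
            old = atomic_exchange(lockp, 1);
        } while (old != 0);
    }
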
xmem instruction does it for us.

and display incorrect opcode encodings as invalid opcodes.

context switches. Should have been committed ages ago (when pcb alignment
was fixed). No functional change.

extension.

get rid of the ``switchframe'' struct definition. As a bonus, this makes
cpu_fork() simpler and unwastes 8 bytes of u area.

MULTIPROCESSOR case in mtx_enter.

that proc_trampoline has been fixed.

- Move the functionality of choosing a process from cpu_switch into
  a much simpler function: cpu_switchto. Instead of having the locore
  code walk the run queues, let the MI code choose the process we
  want to run and only implement the context switching itself in MD
  code.
- Let MD context switching run without worrying about spls or locks.
- Instead of having the idle loop implemented with special contexts
  in MD code, implement one idle proc for each cpu. Make the idle
  loop MI with MD hooks.
- Change the proc lists from the old style vax queues to TAILQs.
- Change the sleep queue from vax queues to TAILQs. This makes
  wakeup() go from O(n^2) to O(n).
There will be some MD fallout, but it will be fixed shortly.
There are also a few cleanups to be done after this.
deraadt@, kettenis@ ok

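To make the data structure change concrete, here is a minimal user-space sketch of the TAILQ pattern from <sys/queue.h> that the run and sleep queues switch to; the struct and field names are illustrative, not the kernel's actual struct proc. Since each element carries its own linkage, removing a known entry is O(1), which is what brings wakeup() from O(n^2) down to O(n) when it walks a sleep queue.

    #include <sys/queue.h>
    #include <stdio.h>

    struct fakeproc {
        int                   pid;
        TAILQ_ENTRY(fakeproc) p_runq;   /* linkage lives in the element itself */
    };

    TAILQ_HEAD(runqueue, fakeproc);

    int
    main(void)
    {
        struct runqueue rq = TAILQ_HEAD_INITIALIZER(rq);
        struct fakeproc procs[3] = { { .pid = 1 }, { .pid = 2 }, { .pid = 3 } }, *p;
        int i;

        /* Queue up runnable processes (MI scheduler side). */
        for (i = 0; i < 3; i++)
            TAILQ_INSERT_TAIL(&rq, &procs[i], p_runq);

        /* Choosing and dequeuing the next process is O(1). */
        p = TAILQ_FIRST(&rq);
        TAILQ_REMOVE(&rq, p, p_runq);
        printf("next to run: pid %d\n", p->pid);

        /* Removing an arbitrary known entry (as wakeup() does) is also O(1). */
        TAILQ_REMOVE(&rq, &procs[2], p_runq);

        TAILQ_FOREACH(p, &rq, p_runq)
            printf("still queued: pid %d\n", p->pid);
        return 0;
    }
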
the holes an MMU may have from a given vm_map. This will be automagically
invoked for newly created vmspaces.
On platforms with MMU holes (e.g. sun4, sun4c and vax), this prevents
mmap(2) hints which would end up in the hole from being accepted as valid,
which would cause unexpected signals when the process tries to access the
hole (since the pmap cannot fill the hole anyway).
Unfortunately, the logic mmap() uses to pick a valid address for anonymous
mappings needs work, as it will only try to find an address higher than the
hint; this causes all mmap() calls with a hint in the hole to fail on vax.
This will be improved later.

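A sketch of the address check implied above, assuming a hole is simply a [start, end) range of virtual addresses the MMU cannot map; the type and function names are made up for illustration and are not the interface added by this change.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in for the kernel's vaddr_t. */
    typedef uintptr_t va_sketch_t;

    /*
     * True when a requested mapping [hint, hint + len) overlaps the MMU
     * hole [hole_start, hole_end); such a hint cannot be honoured, since
     * the pmap is unable to create mappings inside the hole.
     */
    static bool
    range_overlaps_hole(va_sketch_t hint, size_t len,
        va_sketch_t hole_start, va_sketch_t hole_end)
    {
        return hint < hole_end && hint + len > hole_start;
    }
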
Even if the latter would end up removing the mappings by itself, it would
do so using pmap_remove() because phys_map is not intrsafe; but some
platforms use pmap_kenter_pa() in vmapbuf(). By removing the mappings
ourselves, we can ensure that the remove function used matches the enter
function that was used.
Discussed and theoretical ok art@

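The invariant being enforced, in sketch form: mappings created with pmap_kenter_pa() are torn down with pmap_kremove(), never with pmap_remove(). The prototypes follow the pmap(9) interface, but the typedefs and wrapper functions are illustrative only (and assume physically contiguous pages for brevity).

    /* Simplified stand-ins for the kernel's types. */
    typedef unsigned long vaddr_t;
    typedef unsigned long paddr_t;
    typedef unsigned long vsize_t;
    typedef unsigned int  vm_prot_t;
    #define PAGE_SIZE     4096UL

    /* pmap(9) entry points (declarations only). */
    void pmap_kenter_pa(vaddr_t, paddr_t, vm_prot_t);
    void pmap_kremove(vaddr_t, vsize_t);

    /* vmapbuf()-like setup: enter unmanaged mappings for the buffer pages. */
    static void
    map_buffer(vaddr_t kva, paddr_t pa, vsize_t len, vm_prot_t prot)
    {
        vsize_t off;

        for (off = 0; off < len; off += PAGE_SIZE)
            pmap_kenter_pa(kva + off, pa + off, prot);
    }

    /*
     * vunmapbuf()-like teardown: the mappings were entered with
     * pmap_kenter_pa(), so they must be removed with pmap_kremove().
     */
    static void
    unmap_buffer(vaddr_t kva, vsize_t len)
    {
        pmap_kremove(kva, len);
    }
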
This changes nothing on legacy architectures, but is a bit faster (and simpler)
on the interesting ones.

mbuf constants from MD param.h to MI param.h.
Besides being the same on every arch, things will
most probably break if any arch has different values
than the others.
The NMBCLUSTERS constant needs to be MD though;
ok miod@, krw@, claudio@

number, we should test for 10, not 2, as the revision for which the xxx.usr
errata applies; also, going through the errata, revision 2/10 (1010x) _is_
affected.

similar to the fix which went into i386 and amd64 a few weeks ago.

jumping to it. No functional change.

level we were at.

splassert_ctl > 0 in __predict_false().
ok deraadt@

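For reference, a sketch of the pattern: __predict_false() expands to __builtin_expect(), so the compiler treats the splassert_ctl > 0 branch as the unlikely path and keeps the common case (checks disabled) on the straight-line path. The macro bodies below mirror the usual definitions but are reproduced here purely as an illustration.

    /* As in <sys/cdefs.h>: hint that the expression is expected to be false. */
    #define __predict_false(exp)    __builtin_expect(((exp) != 0), 0)

    extern int splassert_ctl;               /* 0 means checks are disabled */
    void splassert_check(int, const char *);

    /* splassert()-style check with the unlikely hint applied. */
    #define splassert_sketch(wantipl)                               \
    do {                                                            \
        if (__predict_false(splassert_ctl > 0))                     \
            splassert_check((wantipl), __func__);                   \
    } while (0)
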
have cpu_info now, so kill the option.
eyeballed by jsg@ and grange@

option. Every architecture implements mutexes now.

send clock ticks to secondary processors.

as done for DAEs not occurring during interrupts.
Remove the check for unprocessed DAE on return from trap() in eh_common.S,
since this can't happen. As a result, the return-from-trap code becomes
identical on 88100 and 88110 systems.