it defines. In some cases, this means pulling in uvm.h or pcb.h
instead, but most of the inclusions were just noise. Tested on
alpha, amd64, armish, hppa, i386, macppc, sgi, sparc64, and vax,
mostly by krw and naddy.
ok krw@
|
|
This is a leftover from a very old workaround for a very old and long gone
pmap_enter() bug.
|
|
on 88110 designs. Brings a ~8% speedup on GENERIC.MP on 197DP.
|
|
ok miod@
|
|
On AViiON systems with the 6:1 CMMU:CPU configuration, force cached
mappings to be writethrough - this probably hides a bug in the code, but
that's the only way so far to get such a system running stably.
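The shape of that workaround, sketched with illustrative pte bit names rather
than the real m88k definitions (aviion_6to1_cmmu is a hypothetical flag set
during CMMU probing):

    #include <stdint.h>

    #define PTE_CACHE_INH   0x40    /* illustrative: cache-inhibited mapping */
    #define PTE_WT          0x20    /* illustrative: writethrough mapping */

    extern int aviion_6to1_cmmu;    /* hypothetical: set while probing CMMUs */

    /* Force every cacheable mapping to writethrough on the affected boards. */
    uint32_t
    pte_cachectl_sketch(uint32_t pte)
    {
            if (aviion_6to1_cmmu && (pte & PTE_CACHE_INH) == 0)
                    pte |= PTE_WT;
            return pte;
    }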
|
|
Also features support for {awkw,bast}ard 6:1 CMMU:CPU configurations (4I2D).
Tested on model 4605, which runs up to cpu_initclocks(), which is not written
for this system family yet. No regression on model 4300.
|
|
ok miod@
|
|
rather than defining it separately for each architecture.
Also set it to 4, to accommodate future UTF-8 support (RFC 3629).
Diff by stsp, committing to catch the libc major bump
ok kettenis@, guenther@
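For context only (this is not the libc diff itself): RFC 3629 limits UTF-8 to
code points up to U+10FFFF, which encode in at most four bytes, so a single
machine-independent value of 4 suffices. The helper below is purely
hypothetical.

    #include <limits.h>     /* MB_LEN_MAX */

    /* RFC 3629 UTF-8 needs at most 4 bytes per code point. */
    int
    utf8_bytes(unsigned long cp)    /* hypothetical helper, not in libc */
    {
            if (cp < 0x80)
                    return 1;
            if (cp < 0x800)
                    return 2;
            if (cp < 0x10000)
                    return 3;
            return 4;               /* never more than the new MB_LEN_MAX of 4 */
    }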
|
|
ok jsing@, miod@
|
|
processor, since caches are physically addressed and we are working on physical
addresses.
|
|
Ok oga@, "the time is now" deraadt@.
|
|
currently active userland pmap in each processor's struct cpu_info.
This skips the complete tlb flush when idle switches back to
the proc previously running on this processor.
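A minimal sketch of the idea, using made-up names (ci_curpmap, tlb_flush_all)
rather than the actual m88k code:

    struct pmap;
    extern void tlb_flush_all(void);        /* hypothetical flush primitive */

    struct cpu_info_sketch {
            struct pmap *ci_curpmap;        /* last user pmap on this cpu */
    };

    void
    switch_pmap_sketch(struct cpu_info_sketch *ci, struct pmap *pm)
    {
            if (ci->ci_curpmap == pm)
                    return;                 /* same address space: skip the flush */
            ci->ci_curpmap = pm;
            tlb_flush_all();                /* only when the pmap really changed */
    }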
|
|
landisk, and the sparc implementation is obviously wrong. That's where I
stopped looking, so who knows what else was broken. A simple comparison of
the existing mtx_enter with the new mtx_enter_try would have told anybody.
|
|
using this soon(ish). Ok oga@, sorta yes kettenis@.
|
|
MULTIPROCESSOR.
|
|
MD code would free resources that couldn't be freed until we were no
longer running on that processor. However, it is unused on all
architectures since mikeb@'s tss changes on x86 earlier in the year.
ok miod@
|
|
with different locking mechanism. 88110 soft ipis are replaced with an
ipi callback which is checked upon return from exception (it can not be kept
as a softintr, as the generic softinterrupt code doesn't have per-cpu
pending softintr queues).
|
|
levels. This will allow for platforms where soft interrupt levels do not
map to real hardware interrupt levels to have soft ipl values overlapping
hard ipl values without breaking spl asserts.
|
|
flushing the whole TLB block every time a pte is modified, store a bitmask
of pending flushes and do them at pmap_update() time. 88100 behaviour is
unchanged.
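The deferral scheme, sketched with invented names (the real code tracks its
own notion of TLB blocks):

    #include <stdint.h>

    extern void tlb_flush_block(int);       /* hypothetical per-block flush */

    static uint32_t pending_flushes;        /* one bit per TLB block */

    void
    pte_modified_sketch(int block)
    {
            pending_flushes |= 1U << block; /* remember it, do not flush yet */
    }

    void
    pmap_update_sketch(void)
    {
            uint32_t pend = pending_flushes;

            pending_flushes = 0;
            while (pend != 0) {
                    int block = __builtin_ctz(pend);        /* lowest set bit */

                    pend &= pend - 1;
                    tlb_flush_block(block);
            }
    }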
|
|
exchange with zero; use it in the soft interrupt code to make it simpler
and faster.
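A sketch of why an atomic exchange helps here, with C11 atomics standing in
for the hardware primitive and invented names: swapping the pending word with
zero reads and clears it in a single step, so no read-then-clear window needs
protecting.

    #include <stdatomic.h>

    extern void run_soft_handler(int);      /* hypothetical per-bit handler */

    static _Atomic unsigned int soft_pending;

    void
    softintr_dispatch_sketch(void)
    {
            /* grab all pending bits and clear them in one atomic operation */
            unsigned int pend = atomic_exchange(&soft_pending, 0);

            while (pend != 0) {
                    int bit = __builtin_ctz(pend);

                    pend &= pend - 1;
                    run_soft_handler(bit);
            }
    }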
|
|
Rework nmi handling to handle ``complex'' NMI faster, and return as fast as
possible from the exception, without doing the AST and softintr dance.
This should avoid too much stack usage under load.
ok deraadt@
|
|
to disable NMI sources in addition to interrupt sources, and we can not
use a quick sequence with shadowing frozen as done for atomic ops.
This lets GENERIC.MP boot multiuser on MVME197DP boards, and is so far stable
enough to be able to recompile a kernel from scratch (with make -j2).
|
|
since it was intended to service NMI occurring in user mode, and we could
end up invoking preempt() and have another cpu start using this stack,
with interesting results.
|
|
can be interrupted by NMI; move the SMP version of these routines from
inlines to a separate file (kernel text shrinks 20KB...).
Since the implementation for 88110 becomes really hairy, the pre-main() code
is responsible for copying the appropriate code over for kernels configured
for both 88100 and 88110 cpus, to avoid having to choose the atomicity
strategy at runtime. Hairy, I said.
This gets GENERIC.MP running much further on 197DP. Not enough to reach
multiuser mode, but it boots up to starting sshd and then panics.
|
|
xmem didn't return the expected value, spin doing regular loads until it
appears we have a chance to grab the lock again.
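This is the classic test-and-test-and-set pattern; a sketch with C11 atomics
in place of xmem, all names illustrative:

    #include <stdatomic.h>

    static atomic_uint lockword;            /* 0 = free, 1 = held */

    void
    spin_lock_sketch(void)
    {
            for (;;) {
                    /* the atomic exchange (the xmem equivalent) */
                    if (atomic_exchange(&lockword, 1) == 0)
                            return;         /* got the lock */
                    /* contended: spin on plain loads until it looks free */
                    while (atomic_load_explicit(&lockword,
                        memory_order_relaxed) != 0)
                            ;
            }
    }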
|
|
synchronizes the pipeline on 88110.
|
|
- dma_cachectl() split into a ``local cpu only'' and ``all cpus'', and an ipi
to broadcast ``local dma_cachectl'' is added.
- cpu_info fields are rearranged, to have the 88100-specific information
and the 88110-specific information overlap, and the struct now has many
more 88110 ugly things.
- more ipi handling in the 197-specific area. Since it is not possible to
have the second processor receive any hardware interrupt (selection
is done on a level basis via ISEL, and we definitely do not want the
main cpu to lose interrupts), the best we can do is to inflict a soft
interrupt on ourselves for late ipi processing. It gets used for softclock and
hardclock on the secondary processor, but since the soft interrupt
dispatcher doesn't have an exception frame, we have to remember parts
of it to build a fake clockframe from the soft ipi handler (ugly but
works).
This now lets GENERIC.MP run a few userland binaries before bugs trigger.
|
|
from interrupt() and related function pointers.
|
|
now set up both the exception frame structure and the exception stack as
soon as possible, so that we can safely get interrupted by an NMI as soon
as we reenable shadowing.
|
|
this defeats the purpose of having a separate stack at this point... Oopsie
|
|
different from regular hardware interrupts to be worth handling the
same way.
Disable IPI reception while we are handling pending IPIs. And do not
reenable them by mistake if we need to send an IPI in return.
This lets GENERIC.MP boot single user on a MVME197DP. There are still
many bugs to fix.
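The shape of that handling loop, with hypothetical names; the point is that
reception stays masked for the whole pass and is re-enabled in exactly one
place, even if a handler has to send an IPI in return.

    extern void ipi_block(void);            /* hypothetical: mask IPI reception */
    extern void ipi_unblock(void);
    extern unsigned int ipi_pending_fetch(void);    /* read and clear pending bits */
    extern void ipi_handle(unsigned int);   /* may send a reply IPI, but must
                                               never unmask reception itself */

    void
    ipi_intr_sketch(void)
    {
            unsigned int bits;

            ipi_block();
            while ((bits = ipi_pending_fetch()) != 0)
                    ipi_handle(bits);
            ipi_unblock();                  /* the single re-enable point */
    }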
|
|
has been invoked on the new process.
|
|
while we are switching pcbs and all sort of bad things could happen.
|
|
pmap makes sure these can't happen.
|
|
the old vs(4) code is gone.
|
|
(i.e. with the valid bit set in them). Found the hard way by Anders Gavare
trying his latest gxemul; this proves the hardware is more permissive than
one would expect it to be...
|
|
even in non-MP kernels, to avoid unnecessary tlb flushes later when
pmap operates on shared pages.
|
|
other MP platforms.
|
|
protected by __ISO_C_VISIBLE > 1999. With a little help from miod@.
ok miod@
|
|
which are uniform for the profclock on each cpu in an SMP system (but using
a different seed for each cpu). On all cpus, avoid seeding with a value out
of the [0, 2^31-1] range (since that is not stable).
ok kettenis drahn
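A hedged sketch of the seeding constraint only (not the kernel's actual
generator): each cpu gets a distinct seed, masked into the [0, 2^31-1] range
that the message says is the only stable one. All names are illustrative.

    #include <stdint.h>

    struct cpu_clockstate {                 /* illustrative per-cpu state */
            uint32_t cs_seed;
    };

    void
    profclock_seed_sketch(struct cpu_clockstate *cs, uint32_t entropy, int cpuno)
    {
            /* different per cpu, and clamped into [0, 2^31-1] */
            cs->cs_seed = (entropy ^ (uint32_t)cpuno) & 0x7fffffffU;
    }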
|
|
anything special to prod a cpu to leave the idle loop in signotify.
powerpc, i386, amd64 and sparc64 will follow soon so that everyone has
the same interface to wake an idling cpu.
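A minimal sketch of how a caller such as signotify() would use that common
hook (sketched here as cpu_unidle(), with stubbed-out types; the surrounding
structures are illustrative, not the real ones):

    struct cpu_info;                        /* opaque per-cpu state */
    extern void cpu_unidle(struct cpu_info *);      /* MD hook: wake an idling cpu */

    struct proc_sketch {                    /* illustrative stub */
            struct cpu_info *p_cpu;         /* cpu the proc last ran on */
    };

    void
    signotify_sketch(struct proc_sketch *p)
    {
            /* ...mark the signal pending and request an AST... */
            cpu_unidle(p->p_cpu);           /* make sure an idling cpu notices */
    }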
|
|
For now, sparc64 is arbitrarily set to 256 (the only architecture that didn't
have a practical limit in the code on the number of cpus).