The caches are used primarily to reduce contention on uvm_lock_fpageq() during
concurrent page faults. For the moment only uvm_pagealloc() tries to get a
page from the current CPU's cache. So on some architectures the caches are
also used by the pmap layer.
Each cache is composed of two magazines. The design is borrowed from Jeff
Bonwick's vmem paper and the implementation is similar to dlg@'s pool_cache.
However there is no depot layer; magazines are refilled directly by the
pmemrange allocator.
Tested by robert@, claudio@ and Laurence Tratt.
ok kettenis@
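As an illustration of the magazine scheme, here is a minimal sketch in C; the
type and function names are invented for this example and do not match the
actual uvm structures:

	/* Sketch only: per-CPU cache of free pages, two magazines, no depot. */
	#define PCPU_MAGAZINE_SIZE	8

	struct pcpu_magazine {
		int		 pm_nitems;
		struct vm_page	*pm_pages[PCPU_MAGAZINE_SIZE];
	};

	struct pcpu_pagecache {
		struct pcpu_magazine pc_mags[2];
	};

	/*
	 * Fast path: take a page from the current CPU's magazines without
	 * touching the global free-page queue lock.  When both magazines are
	 * empty, the caller refills one directly from the pmemrange allocator.
	 */
	struct vm_page *
	pcpu_pagecache_get(struct pcpu_pagecache *pc)
	{
		int i;

		for (i = 0; i < 2; i++) {
			struct pcpu_magazine *pm = &pc->pc_mags[i];
			if (pm->pm_nitems > 0)
				return pm->pm_pages[--pm->pm_nitems];
		}
		return NULL;	/* fall back to the locked global allocator */
	}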
|
|
flavours: pre-usIII, usIII, and sun4v.
This allows us to get rid of the HORRID_III_HACK define in locore and switch
pre-usIII systems to the older, slightly simpler, code for these routines.
ok claudio@ kettenis@
|
|
unused Skylake AVX-512 MDS handler and increases the ci_mds_tmp array to
64 bytes. With help from guenther@
ok deraadt@, guenther@
|
|
ok bluhm@ jca@
|
|
no functional change, found by smatch warnings
ok miod@ bluhm@
|
|
Syzbot found a race when enabling vmm mode on multiprocessor systems.
Protect the vmm start/stop lifecycle by taking the write lock used
for protecting the status of the vmm device.
Reported-by: syzbot+6ae9cec00bbe45fd7782@syzkaller.appspotmail.com
ok gnezdo@
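In outline, the fix is the standard write-lock pattern around the mode change;
this sketch uses an invented lock name rather than the actual vmm(4) softc
field:

	#include <sys/rwlock.h>

	/* Hypothetical stand-in for the lock protecting the device status. */
	struct rwlock vmm_status_lock = RWLOCK_INITIALIZER("vmmstatus");

	void
	vmm_mode_start(void)
	{
		rw_enter_write(&vmm_status_lock);
		/* check the current status and enable vmm mode on all CPUs */
		rw_exit_write(&vmm_status_lock);
	}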
|
|
Spectre-V4 a few weeks ago. Treat Qualcomm Kryo 400 Silver like Cortex-A55
for Spectre-V2 since that is what it is.
ok jsg@
|
|
ok miod@ guenther@
|
|
files which really need <machine/pte.h> guts.
|
|
ok miod@
|
|
In order to continue work on mmio and other instruction emulation,
vmd(8) needs the ability to inject exceptions (like page faults)
from userland.
Refactor the way events are injected from userland, cleaning up how
hardware (external) interrupts are injected in the process.
ok mlarkin@
|
|
While there, replace inlined NENTRY by actual use of that macro.
ok kettenis@
|
|
ok kettenis@
|
|
fields were (seldom) written to but never used for anything.
ok kettenis@
|
|
This code was #if 0, except for instruction misses where it had been enabled
probably by mistake... and was demapping in the data mmu anyway...
(#include <facepalm.h>)
ok kettenis@
|
|
it made sense in the early days of the sparc64 port, this code has bitrotted
and is getting in the way. Time for a visit to the Attic.
This removes:
- interrupt handling debug code (forcing hz = 1, probably broken for years).
- unused or too invasive DEBUG code which no one will ever use in this state.
- #if 0 code blocks which have been this way since locore.s revision 1.1 and
will never get enabled.
ok kettenis@
|
|
- one macro for the inline pseg_get logic used in various MMU trap handlers.
- one macro for the TSB locking logic in various PTE update routines.
- one macro for the sun4v rwindow content saving.
ok kettenis@
|
|
ok kettenis@
|
|
some failure conditions.
|
|
ok kettenis@
|
|
is indicated by a "dma-noncoherent" property on the bus or device nodes
in the device tree. Set the BUS_DMA_COHERENT flag on the DMA tag for
mainbus(4) and modify the flags based on the presence of "dma-coherent"
and "dma-noncoherent" properties where appropriate.
ok patrick@
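The property check itself is straightforward; a sketch of the idea, assuming
the usual OF_getproplen() lookup and leaving out the per-driver attach glue
(the function name is invented):

	#include <sys/types.h>
	#include <machine/bus.h>
	#include <dev/ofw/openfirm.h>

	/*
	 * Start from the flags inherited from the parent bus and adjust them
	 * based on the node's "dma-coherent"/"dma-noncoherent" properties.
	 */
	int
	node_dma_flags(int node, int parent_flags)
	{
		int flags = parent_flags;

		if (OF_getproplen(node, "dma-coherent") >= 0)
			flags |= BUS_DMA_COHERENT;
		if (OF_getproplen(node, "dma-noncoherent") >= 0)
			flags &= ~BUS_DMA_COHERENT;
		return flags;
	}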
|
|
attributes, the "direct map" becomes problematic as it results in
mappings for the same physical memory pages with different cacheability
attributes. The RISC-V specification of the "Svpbmt" extension doesn't
outright state that this is "verboten" like on some other
architectures that we support. But it does say that it may result in
access with the wrong attributes. So restrict the use of the direct
map to just mapping the 64MB block that the bootloader loaded us into.
To make this possible, map the device tree later like we do on arm64.
This allows us to get rid of some assembly code in locore.S as a bonus!
ok miod@, jca@
|
|
At boot, the powerpc64 kernel was calling
pmap_bootstrap -> pmap_kenter_pa -> mtx_enter(&pmap_hash_lock)
before it did
pmap_init -> mtx_init(&pmap_hash_lock, IPL_HIGH)
Change from mtx_init to MUTEX_INITIALIZER. This allows an option
WITNESS kernel to boot without warning of an uninitialized mutex.
Also change macppc's pmap_hash_lock from __ppc_lock_init to
PPC_LOCK_INITIALIZER, though WITNESS doesn't see this lock.
ok mpi@
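The difference, in outline: a statically initialized mutex is usable from the
very first instruction, so the boot-time ordering problem disappears.

	#include <sys/mutex.h>

	/*
	 * Before: the lock only became valid once pmap_init() ran
	 *	mtx_init(&pmap_hash_lock, IPL_HIGH);
	 * yet pmap_kenter_pa() already took it during pmap_bootstrap().
	 *
	 * After: static initialization, valid at boot, no ordering to get wrong.
	 */
	struct mutex pmap_hash_lock = MUTEX_INITIALIZER(IPL_HIGH);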
|
|
make sure only one of them is prototyped and only one of them is implemented.
ok mpi@ kettenis@
|
|
level and a numeric mapping of the cpu vendor, both from CPUID(0).
Convert the general use of strcmp(cpu_vendor) to simple numeric
tests of ci_vendor. Track the minimum of all ci_cpuid_level in the
cpuid_level global and continue to use that for what vmm exposes.
AMD testing help matthieu@ krw@
ok miod@ deraadt@ cheloha@
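To illustrate the conversion, a small sketch; the enum and struct here are
placeholders, not the exact definitions in <machine/cpu.h>:

	#include <sys/types.h>

	/* Placeholder vendor codes derived from the CPUID(0) vendor string. */
	enum cpu_vendor { CPUV_UNKNOWN, CPUV_AMD, CPUV_INTEL };

	struct cpu_info_example {
		uint32_t	ci_cpuid_level;	/* max basic CPUID leaf */
		enum cpu_vendor	ci_vendor;	/* numeric vendor code */
	};

	/*
	 * Before: strcmp(cpu_vendor, "GenuineIntel") == 0 scattered around.
	 * After: a single integer comparison on the per-CPU field.
	 */
	static inline int
	cpu_is_intel(const struct cpu_info_example *ci)
	{
		return ci->ci_vendor == CPUV_INTEL;
	}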
|
|
of later enhancements, removing the save/restore of flags, selectors,
and MSRs: flags are caller-saved and don't need restoring while
selectors and MSRs are auto-restored. The FSBASE, GSBASE, and
KERNELGSBASE MSRs just need the correct values set with vmwrite()
in the "on new CPU?" block of vcpu_run_vmx().
Also, only rdmsr(MSR_MISC_ENABLE) once in vcpu_reset_regs_vmx(),
give symbolic names to the exit-load MSR slots, eliminate
VMX_NUM_MSR_STORE, and #if 0 the vc_vmx_msr_entry_load_{va,pa} code
and definitions as unused.
ok dv@
|
|
when sh*t hits the fan; per kettenis@'s request and forgotten in the previous
cleanup commit.
|
|
supports them.
ok jca@
|
|
Instead, require all callers to put the right value in the ih_pil field, and
have intr_establish() trust them rather than assigning this field again from
its first argument.
ok claudio@ kettenis@
|
|
order to speed up window spills, rather than doing an inline pmap_extract
(well, pseg_get).
ok claudio@ kettenis@
|
|
ok claudio@ kettenis@
|
|
There is one code path using it in %g2 and another using it in %g7.
There is no reason for them to use different registers, and fixing
this allows the check to be performed a bit earlier.
ok claudio@ kettenis@
|
|
ok claudio@ kettenis@
|
|
to check for missing BIAS.
ok claudio@ kettenis@
|
|
stack. Remove duplicated "panic if uvm_fault() fails and we are in kernel mode"
blocks.
ok claudio@ kettenis@
|
|
code for this was never written and all uses target the running cpu anyway,
so stop pretending it may do things it won't do and drop that cpu argument.
ok claudio@ kettenis@
|
|
ok claudio@ kettenis@
|
|
from this header file.
ok claudio@ kettenis@
|
|
This makes intreg.h locore-friendly: after this change it only contains the
MAXINTNUM define.
ok claudio@ kettenis@
|
|
ok claudio@ kettenis@
|
|
- since there are no hardware fpu operation queues on real sparc64 hardware,
don't bother declaring the relevant struct and fields.
- when an fpu instruction needs to be emulated, pass it directly to
fpu_cleanup rather than fake its appearance in the fpu queue. While there,
also pass the ready-to-use union sigval computed in trap() in case a
signal needs to be delivered.
ok claudio@ kettenis@
|
|
There should hopefully be no further faults on this proc causing an fpu
state to be handled, but better safe than sorry.
ok claudio@ kettenis@
|
|
ok claudio@ kettenis@
|