Age | Commit message | Author |
|
wrong and the buffer size is implied by the field attribute instead of the
field length, as is the case for normal OpRegion fields. Fixes various laptops
where reading multiple bytes from AML over an i2c bus would overflow
the buffer. Still fixes the Dell Precision 3640.
ok tb@
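A hedged sketch of the sizing rule, with a hypothetical helper; the
attribute values are the GenericSerialBus protocol values from the ACPI
spec, not necessarily the constants the driver uses:

    /* Hypothetical helper: derive the data length of a
     * GenericSerialBus transfer from the field's access attribute
     * (values per the ACPI spec), not from the field length. */
    int
    gsb_buflen(int attr, int access_len)
    {
        switch (attr) {
        case 0x02:      /* AttribQuick: no data bytes */
            return 0;
        case 0x04:      /* AttribSendReceive */
        case 0x06:      /* AttribByte */
            return 1;
        case 0x08:      /* AttribWord */
            return 2;
        case 0x0a:      /* AttribBlock: up to 255 data bytes */
            return 255;
        case 0x0b:      /* AttribBytes: AccessLength data bytes */
            return access_len;
        default:
            return 0;
        }
    }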
|
|
This is necessary to do this accounting without the KERNEL_LOCK().
ok mvs@, kettenis@
|
|
ok patrick@
|
|
manipulating them directly in pmap_clear_modify().
ok deraadt@
|
|
ok ratchov@
|
|
The macppc kernel, when running on G5, may get page faults while
executing itself. Because we reorder our kernels, these faults happen
in different places in each kernel. I got unlucky with a bsd.mp where
the function __ppc_lock() crossed a page boundary. The fault handler
recursively called __ppc_lock() and caused my G5 to freeze or hang
very early during boot, while trying to map the framebuffer.
Change the lock to spin while (mpl->mpl_cpu != NULL). Acquire the
lock with a single atomic write, by setting mpl_cpu and leaving
mpl_count at 0. Page faults that recursively call __ppc_lock() and
__ppc_unlock() should now not corrupt the lock.
In case we hold the lock but get a page fault before membar_enter() or
after membar_exit(), the recursive calls now have memory barriers.
Delete some unused functions. In the past, __ppc_lock was __mp_lock,
but today, the only __ppc_lock is PMAP_HASH_LOCK.
ok kettenis@
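As a hedged sketch of the scheme (simplified; helper names assumed from
the description above):

    void
    __ppc_lock(struct __ppc_lock *mpl)
    {
        struct cpu_info *ci = curcpu();

        if (mpl->mpl_cpu == ci) {
            /* Recursive entry from a page fault while we already
             * hold the lock: only bump the count. */
            mpl->mpl_count++;
            return;
        }

        /* Acquire with a single atomic write: set mpl_cpu and
         * leave mpl_count at 0. */
        while (atomic_cas_ptr(&mpl->mpl_cpu, NULL, ci) != NULL)
            ;   /* spin while (mpl->mpl_cpu != NULL) */
        membar_enter_after_atomic();
    }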
|
|
powerpc64.
ok deraadt@
|
|
We don't emulate it, so guests that attempt to read it just get #GP
injected anyway.
OK mlarkin@
|
|
At this point the mechanism should closely resemble the powerpc64
save/restore points, with one difference: reload avoidance. This
replaces the previous 'aggressive' FPU save code that was (mostly)
implemented before and is still present on arm32 and arm64.
One piece of that other design remains: if
pcb->pcb_fpcpu == ci && ci->ci_fpuproc == p
after sleep, the FPU state is automatically re-activated without
needing to reload it.
To enable this, the pointer pair is not changed on FPU context save,
to indicate that the CPU still holds the valid content as long as
both of those pointers point at each other.
Note that if another core steals the FPU context (when we get to SMP),
pcb->pcb_fpcpu will point at another cpu, and from that it will know
to reload the FPU context. Optimistically enabling the FPU this way
only makes sense on riscv64, because it has the notion of FPU "on and
clean". Other implementations would need to 'fault on' the FPU enable,
but could avoid the FPU context load if no other processor has run
this FPU context and no other process has used the FPU on this core.
ok kettenis@ deraadt@ (prior to a couple of fixes)
(this file was missing from original commit)
|
|
Although the page table cannot prevent reads on write-only pages,
the first access in trap() knows what it is. This should be passed
to uvm_fault(). Then regress/sys/kern/fork-exit passes. Copy the
new powerpc64 logic to powerpc.
OK tobhe@ kettenis@ deraadt@
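In outline, the idea looks like this (a sketch with assumed frame and
flag names; the DSISR store bit distinguishes the access):

    /* In trap(), for a data storage exception: the hardware tells
     * us whether the faulting access was a store, so hand the real
     * access type to uvm_fault() instead of deriving it from the
     * page protection. */
    vm_prot_t access_type;

    if (frame->dsisr & DSISR_STORE)   /* assumed field/flag names */
        access_type = PROT_WRITE;
    else
        access_type = PROT_READ;
    error = uvm_fault(map, trunc_page(va), 0, access_type);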
|
|
ok kettenis@
|
|
At this point the mechanism should closely resemble the powerpc64
save/restore points, with one difference: reload avoidance. This
replaces the previous 'aggressive' FPU save code that was (mostly)
implemented before and is still present on arm32 and arm64.
One piece of that other design remains: if
pcb->pcb_fpcpu == ci && ci->ci_fpuproc == p
after sleep, the FPU state is automatically re-activated without
needing to reload it.
To enable this, the pointer pair is not changed on FPU context save,
to indicate that the CPU still holds the valid content as long as
both of those pointers point at each other.
Note that if another core steals the FPU context (when we get to SMP),
pcb->pcb_fpcpu will point at another cpu, and from that it will know
to reload the FPU context. Optimistically enabling the FPU this way
only makes sense on riscv64, because it has the notion of FPU "on and
clean". Other implementations would need to 'fault on' the FPU enable,
but could avoid the FPU context load if no other processor has run
this FPU context and no other process has used the FPU on this core.
ok kettenis@ deraadt@ (prior to a couple of fixes)
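A minimal sketch of the reload-avoidance check, using the pointer pair
named above (helper names hypothetical):

    /* When re-enabling the FPU for process p on cpu ci: */
    if (pcb->pcb_fpcpu == ci && ci->ci_fpuproc == p) {
        /* The pointers still pair up: this CPU kept p's FPU
         * state valid across the sleep, so just switch the FPU
         * back on without reloading registers. */
        fpu_enable();                      /* hypothetical helper */
    } else {
        /* Another core stole the context (or it was never here):
         * reload from the PCB and re-pair the pointers. */
        fpu_load_state(&pcb->pcb_fpstate); /* hypothetical helper */
        pcb->pcb_fpcpu = ci;
        ci->ci_fpuproc = p;
    }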
|
|
no architecturally defined caches (yet) so there is nothing to set up here.
Gets rid of some more useless XXX.
|
|
ok patrick@
|
|
ok deraadt@
|
|
and the same as amd64. The machines have large amounts of memory.
discussed with kettenis@
|
|
resident set size. This replicates what the sysctl code does and fixes
a kernel crash reported by robert@
ok deraadt@
|
|
Page table mappings are frequently created and destroyed in the kernel
address space. Traditionally, these mappings have been marked as
"global" mappings which means that a TLB flush via %cr3 load does not
invalidate them. This is ok as these mappings are the same for all
processes.
With the advent of MELTDOWN, global mappings were disabled for CPUs
that are affected by rogue data cache load (RDCL aka MELTDOWN). To
compensate for this we started using PCID and the kernel got its own
process context identifier. Thus the hardware is allowed to cache
kernel mappings again.
However, a CPU that supports PCID but is _not_ affected by MELTDOWN
(i.e. ARCH_CAPABILITIES.RDCL_NO=1) will now use both: global PTE
mappings and PCID.
This is a problem if range based TLB invalidations are used to update/
flush cached TLBs after a change to the kernel page tables. The reason
is that the invpcid instruction (function 0) that is used to remove the
cached TLBs will not remove global mappings. In the non-PCID case invlpg
is used instead which does remove global mappings. In the MELTDOWN case,
global mappings are not used at all.
The solution is to not use global mappings if PCID is active, as the
latter should already be enough to let the hardware cache kernel address
translations across address space switches; the global flag is not
required.
From Christian Ehrhardt
ok bluhm@ guenther@ mlarkin@
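The resulting policy, sketched in pseudo-C (the variable names here are
assumptions based on the description above):

    /* Choose the global bit used for kernel PTEs. */
    if (cpu_meltdown)
        pg_g_kern = 0;     /* Meltdown: no global mappings at all */
    else if (pmap_use_pcid)
        pg_g_kern = 0;     /* PCID caches kernel TLB entries across
                            * %cr3 loads; global PTEs would survive
                            * invpcid (function 0) range flushes */
    else
        pg_g_kern = PG_G;  /* classic global kernel mappings,
                            * removable per-page with invlpg */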
|
|
- MSI support
- Interfaces to route interrupts to specific CPUs
- Proper interrupt barriers
- s/riscv_intr_handler/machine_intr_handler/
ok mlarkin@
|
|
To issue an AT command (AT+QCFG="usbnet",2) to change to MBIM mode.
Tested by Shawn Chiou on rpi4; "of course" deraadt@
|
|
ok deraadt@ mpi@
|
|
pciecam(4) implementation hidden away in arch/armv7/vexpress.
Unbreaks armv7 kernel builds.
|
|
ok mpi@
|
|
There's already such a barrier in the usbd_transfer() code-path, but this
one is called when the frames are queued to the HC ring. The audio
samples are stored in memory by userland later, *after* the frames are
scheduled (but before they are sent on the wire), so a barrier is
needed there. Without this change, the data produced by userland may
stay in the CPU caches and not be "seen" by the HC's DMA engine; in
turn the device plays noise on certain arm64 machines (RPI4, for
instance).
Fix mostly from Luca Castagnini with a few tweaks from me. OK patrick@
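The shape of the fix, as a hedged sketch (the real code presumably goes
through the USB stack's DMA sync wrapper; names here are illustrative):

    /* When queueing an isochronous frame to the HC ring: flush the
     * sample data out of the CPU caches so the host controller's
     * DMA engine reads what userland just wrote. */
    bus_dmamap_sync(sc->sc_bus.dmatag, dma->map, offset, len,
        BUS_DMASYNC_PREWRITE);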
|
|
with a FENCE.I instruction which does exactly what we need to synchronize
the I-Cache with the D-Cache.
ok mlarkin@, jsg@
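For reference, the whole operation reduces to one instruction (sketch):

    static inline void
    fence_i(void)
    {
        /* FENCE.I makes all stores by this hart visible to its
         * instruction fetches, i.e. it syncs the I-cache with the
         * D-cache. */
        __asm volatile ("fence.i" ::: "memory");
    }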
|
|
ok deraadt@, jsg@
|
|
Fragmented frames were never of any practical use to us anyway, given that
our net80211 stack does not (yet?) re-assemble them.
Counter-measure against attacks where an arbitrary packet is injected in a
fragment with attacker-controlled content (via an AP which supports fragments).
See https://papers.mathyvanhoef.com/usenix2021.pdf
Section 6.8 "Treating fragments as full frames"
ok mpi@
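A hedged sketch of the input check (the FC1 flag and fragment-number
mask are standard net80211 definitions; the surrounding code is
simplified):

    struct ieee80211_frame *wh = mtod(m, struct ieee80211_frame *);
    u_int16_t seq = letoh16(*(u_int16_t *)wh->i_seq);

    /* Drop anything that is, or belongs to, a fragmented frame:
     * more fragments follow, or the fragment number is non-zero. */
    if ((wh->i_fc[1] & IEEE80211_FC1_MORE_FRAG) ||
        (seq & IEEE80211_SEQ_FRAG_MASK)) {
        m_freem(m);
        return;
    }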
|
|
To aid vmx debugging, specify if the error was related to vmresume
or vmlaunch. For vm-entry failures due to failed checks, decode the
errors per the SDM Vol. 3C 26.8.
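A sketch of the reporting (VMCS field encoding per the SDM; code
simplified, variable names assumed):

    uint64_t insn_error;

    /* On VM-entry failure, read the VM-instruction error field
     * (encoding 0x4400) and say which entry instruction failed. */
    if (vmread(VMCS_INSTRUCTION_ERROR, &insn_error) == 0)
        printf("%s: %s failed: error %llu\n", __func__,
            resume ? "vmresume" : "vmlaunch", insn_error);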
|
|
The race condition results in vmread errors when disabling interrupt
window exiting. The vmd(8) guest gets an EINVAL response to its
VMM_IOC_RUN ioctl and aborts, sending the guest to an abrupt end.
Similarly to the recent SVM commit, this changes the vcpu run loop
logic to check for resuming on a different cpu. If so, the VMCS is
loaded onto the new cpu.
Instead of relying on just a "resume" flag, the real reason (other than
a cpu switch) that would require reloading the VMCS is that vmm may have
cleared the VMCS before yielding to the scheduler. The "resume" flag is
still used in vmx_enter_guest to toggle between vmlaunch/vmresume calls,
but it is no longer the arbiter of whether vmm reloads the VMCS.
A more subtle race condition still exists related to clearing the VMCS
on the previous cpu, but that's for a future commit.
OK mlarkin@
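In outline (field names as described above; sketch only):

    /* Only reload the VMCS when it is actually required: we were
     * rescheduled onto a different cpu, or vmm cleared the VMCS
     * before yielding to the scheduler. */
    if (ci != vcpu->vc_last_pcpu ||
        vcpu->vc_vmx_vmcs_state == VMCS_CLEARED) {
        if (vmptrld(&vcpu->vc_control_pa)) {
            ret = EINVAL;
            break;
        }
    }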
|
|
The bug was reported by Sebastien and Olivier Cherrier.
It turned out that pf_state_key_link_reverse() does not
grab enough references when both state keys (sk and skrev)
are identical. This makes pf trip an assert later, when
references are being dropped:
panic(ffffffff81dfbc8e) at panic+0x11d
__assert(ffffffff81e64b54,ffffffff81e0a6ee,33a,ffffffff81e03b7f)
refcnt_rele(fffffd810bf02458) at refcnt_rele+0x6f
pf_state_key_unref(fffffd810bf023f0) at pf_state_key_unref+0x21
pf_remove_state(fffffd810c0c4578) at pf_remove_state+0x1fa
pf_purge_expired_states(2) at pf_purge_expired_states+0x232
pf_purge(ffffffff82236a30) at pf_purge+0x33
taskq_thread(ffff800000032080) at taskq_thread+0x81
Fix tested by Olivier Cherrier and semarie@
OK semarie@
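The essence of the fix, sketched (simplified from pf's state-key
linking):

    /* pf_state_key_link_reverse(): take one reference per link.
     * sk and skrev may be the same key; refcounting both links
     * keeps the unlink path symmetric and the assert happy. */
    sk->reverse = pf_state_key_ref(skrev);
    skrev->reverse = pf_state_key_ref(sk);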
|
|
ok deraadt@
|
|
change.
|
|
preparation for sharing PCIe host bridge drivers between arm64 and riscv64.
ok mpi@, mlarkin@, patrick@
|