path: root/sys/arch
Age  Commit message  Author
2023-09-14  clockintr: replace CL_RNDSTAT with global variable statclock_is_randomized  (Scott Soule Cheloha)
In order to separate the statclock from the clock interrupt subsystem we need to move all statclock state out into the broader kernel. Start by replacing the CL_RNDSTAT flag with a new global variable, "statclock_is_randomized", in kern_clock.c. Update all clockintr_init() callers to set the boolean instead of passing the flag. Thread: https://marc.info/?l=openbsd-tech&m=169428749720476&w=2
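A minimal sketch of the shape of this change, assuming a hypothetical MD caller md_cpu_initclocks() and a stubbed clockintr_init(); only the names statclock_is_randomized, CL_RNDSTAT and clockintr_init() come from the commit above:

/* Hypothetical stand-in so this sketch compiles on its own; not the real prototype. */
static void clockintr_init(unsigned int flags) { (void)flags; }

/* After the change, the flag state lives in kern_clock.c as a plain global. */
int statclock_is_randomized;

/* Illustrative machine-dependent clock setup. */
static void
md_cpu_initclocks(void)
{
	/* before: clockintr_init(CL_RNDSTAT); */
	statclock_is_randomized = 1;	/* dither the statclock interval */
	clockintr_init(0);
}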
2023-09-12  Use IORT ITS nodes to find the right ITS instance to use when establishing  (Jonathan Matthew)
interrupts. This makes MSI/MSI-X work on platforms like the Ampere Altra which have an ITS instance for each PCI domain. also tested by cheloha@ ok kettenis@ patrick@
2023-09-12  Store ITS ID in struct interrupt_controller so it can be used to look up  (Jonathan Matthew)
the right ITS to use when establishing interrupts. ok kettenis@ patrick@
2023-09-12  Add an "openbsd,gic-its-id" property to gic-its nodes containing the ITS ID.  (Jonathan Matthew)
ok kettenis@ patrick@
2023-09-11  Remove unnecessary <sys/selinfo.h> includes.  (Vitaliy Makkoveev)
ok jsg
2023-09-10  load amd patch into a malloc'd region to make it page aligned  (Jonathan Gray)
avoids a General-Protection Exception on patch loader wrmsr with
A10-5700, TN-A1 00610f01 15-10-01
the alignment requirement is not present on at least
Ryzen 5 2600X, PiR-B2 00800f82 17-08-02
problem reported and fix tested by espie@
2023-09-08  Clean up old console bootargs  (Klemens Nanni)
7.3 is long gone, you must have new bootloaders and new kernels. Zaps both condition and else block, unindent and merge lines where they fit.
Feedback OK kettenis, tests OK denis
2023-09-06  Remove -mabi=elfv2 option. This is the default for OpenBSD and clang 16  (Mark Kettenis)
generates a (spurious) error about it in certain contexts. This is fixed in later versions (see https://reviews.llvm.org/D156351) but it is easier to just drop the option. ok miod@, jsg@
2023-09-06  vmm(4)/vmd(8): include pending interrupt in vm_run_params.  (Dave Voutila)
To remove an ioctl(2) from the vcpu thread hotpath in vmd(8), add a flag in the vm_run_params structure to indicate if there's another interrupt pending. This reduces latency in vcpu work related to i/o as we save a trip into the kernel just to flip the interrupt pending flag on or off. Tested by phessler@, mbuhl@, stsp@, and Mischa Peters. ok mlarkin@
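A rough sketch of the shape of this change; the structure subset and field names below (vrp_intr_pending and friends) are illustrative guesses, not the actual vmm(4) definitions:

#include <stdint.h>

/* Illustrative subset of a run-parameters structure; field names are guesses. */
struct vm_run_params_sketch {
	uint32_t	vrp_vcpu_id;
	uint8_t		vrp_irq;		/* interrupt being injected, if any */
	uint8_t		vrp_intr_pending;	/* more interrupts are waiting */
};

/* vcpu loop: hand the "another interrupt is pending" hint to the kernel as
 * part of the run ioctl instead of issuing a separate ioctl to toggle it. */
static void
vcpu_prepare_run(struct vm_run_params_sketch *vrp, int pending_irq_count)
{
	vrp->vrp_intr_pending = (pending_irq_count > 1) ? 1 : 0;
}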
2023-09-06  revert disabling warnings for zlib on clang >= 15  (Jonathan Gray)
no longer needed with zlib 1.3 ok tb@
2023-09-05  vmm(4): switch the APMI CPUID mask to an include mask  (Mike Larkin)
dv points out that there are other bits there that imply the existence of other MSRs, so switching this to an include list is a better idea.
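For context, the difference between the two masking styles, as a small self-contained example; the mask values here are made up, not vmm(4)'s:

#include <stdint.h>

/* Exclude mask: hide the listed bits and pass everything else through.  Any
 * new bit the host CPU grows later leaks to the guest by default. */
#define EDX_EXCLUDE_MASK	(1U << 7)

/* Include mask: pass only the listed bits through and hide everything else.
 * New bits stay hidden until someone explicitly adds them to the list. */
#define EDX_INCLUDE_MASK	(1U << 8)

static uint32_t
mask_exclude(uint32_t edx)
{
	return edx & ~EDX_EXCLUDE_MASK;
}

static uint32_t
mask_include(uint32_t edx)
{
	return edx & EDX_INCLUDE_MASK;
}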
2023-09-05  Fix touchpad on newer device trees. The *gpio fields moved up one layer.  (Tobias Heider)
The driver will work with both formats for now but we plan to remove the old one in the future. ok kettenis@
2023-09-03  vmm(4): Suppress AMD HwPstate visibility to guests  (Mike Larkin)
On newer Ryzen/EPYC, we need to hide the HwPstate bit in CPUID 0x80000007:EDX, or guests will try to access the MSRs associated with that feature, and that will fail with #GP. ok deraadt
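A hedged sketch of hiding that capability; CPUID Fn8000_0007 EDX bit 7 is AMD's HwPstate bit, but the function and macro names are illustrative, not vmm(4)'s code (per the commit above, the real code now applies an include mask rather than clearing single bits):

#include <stdint.h>

#define AMD_APM_LEAF		0x80000007U	/* advanced power management info */
#define APM_EDX_HWPSTATE	(1U << 7)	/* hardware P-state control */

/* Drop the HwPstate bit from the CPUID data a guest gets to see, so it never
 * goes looking for the matching MSRs. */
static void
filter_apm_leaf_for_guest(uint32_t leaf, uint32_t *edx)
{
	if (leaf == AMD_APM_LEAF)
		*edx &= ~APM_EDX_HWPSTATE;
}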
2023-09-03  Adapt tlb flush calls following arm64/pmap.c  (Jeremie Courreges-Anglas)
1. in pmap_enter() no need to call tlb_flush_page() if we don't actually insert a pted
2. all callers of pmap_pte_remove() already call tlb_flush_page()
This seems to result in some performance improvement (18mn -> 17mn15) while building libc on a HiFive Unmatched. Also zap whitespace and useless comments to further reduce the diff with arm64/pmap.c. ok kettenis@
2023-09-03  Inline PTED_* functions and actually use PTED_WIRED()  (Jeremie Courreges-Anglas)
As noted by drahn@ the compiler did inline said functions, but it also provided them as unused symbols. ok miod@ mlarkin@ kettenis@
2023-09-03  pmap_page_protect() should not unmap pages after making them readonly.  (Jeremie Courreges-Anglas)
This brings riscv64/pmap.c in line with arm64/pmap.c, original fix by drahn@ ok miod@ kettenis@ mlarkin@
2023-08-30  Implement a few more clocks related to the GMAC.  (Mark Kettenis)
ok jsing@
2023-08-30  Add support for the upstream Linux device tree bindings. Support for the  (Mark Kettenis)
preliminary bindings will be removed in a couple of weeks. ok kevlo@, jsing@, jmatthew@
2023-08-29  Remove p_rtime from struct proc and replace it by passing the timespec  (Claudio Jeker)
as argument to the tuagg_locked function.
- Remove incorrect use of p_rtime in other parts of the tree. p_rtime was almost always 0 so including it in any sum did not alter the result.
- In main() the update of time can be further simplified since at that time only the primary cpu is running.
- Add missing nanouptime() call in cpu_hatch() for hppa
- Rename tuagg_unlocked to tuagg_locked like it is done in the rest of the tree.
OK cheloha@ dlg@
2023-08-29  Enable dwiic(4) and axppmic(4).  (Mark Kettenis)
2023-08-26  Adapt glxclk(4) for clockintr  (Visa Hankala)
Make glxclk(4) functional again. The MFGPT provides the CPU core an external clock interrupt. This interrupt enables a later change that reduces energy usage when the system is idle. Also, the use of the external clock fixes timekeeping when the core clock frequency is adjusted.
2023-08-23  all platforms: separate cpu_initclocks() from cpu_startclock()  (Scott Soule Cheloha)
To give the primary CPU an opportunity to perform clock interrupt preparation in a machine-independent manner we need to separate the "initialization" parts of cpu_initclocks() from the "start the clock interrupt" parts. Currently, cpu_initclocks() does everything all at once, so there is no space for this MI setup.
Many platforms have more-or-less already done this separation by implementing a separate routine named "cpu_startclock()". This patch promotes cpu_startclock() from de facto standard to mandatory API.
- Prototype cpu_startclock() in sys/systm.h alongside cpu_initclocks(). The separation of responsibility between the two routines is a bit fuzzy but the basic guidelines are as follows:
  + cpu_initclocks() must initialize hz, stathz, and profhz, and call clockintr_init().
  + cpu_startclock() must call clockintr_cpu_init() and start the clock interrupt cycle on the calling CPU.
  These guidelines will shift in the future, but that's the way things stand as of *this* commit.
- In initclocks(): first call cpu_initclocks(), then do MI setup, and last call cpu_startclock().
- On platforms where cpu_startclock() already exists: don't call cpu_startclock() from cpu_initclocks() anymore.
- On platforms where cpu_startclock() doesn't yet exist: implement it. Usually this is as simple as dividing cpu_initclocks() in two.
Tested on amd64 (i8254, lapic), arm64, i386 (i8254, lapic), macppc, mips64/octeon, and sparc64. Tested on arm/armv7 (agtimer(4)) by phessler@ and jmatthew@. Tested on m88k/luna88k by aoyama@. Tested on powerpc64 by gkoehler@ and mlarkin@. Tested on riscv64 by jmatthew@.
Thread: https://marc.info/?l=openbsd-tech&m=169195251322149&w=2
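A minimal sketch of the MI ordering described above, with the MD hooks stubbed out so it stands alone; the real routines live in kern_clock.c and each port's machine-dependent code:

/* Stub MD hooks so the sketch stands alone; the real ones are per-port. */
static void cpu_initclocks(void) { /* set hz/stathz/profhz, call clockintr_init() */ }
static void cpu_startclock(void) { /* clockintr_cpu_init(), start the interrupt cycle */ }

/* MI initclocks(), following the ordering described in the commit message. */
static void
initclocks_sketch(void)
{
	cpu_initclocks();	/* MD initialization only; no clock interrupts yet */
	/* ... machine-independent clock interrupt setup would go here ... */
	cpu_startclock();	/* MD: start the clock interrupt cycle on this CPU */
}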
2023-08-22  i386: i8254_initclocks: set IPL_MPSAFE for clock/rtc IRQs  (Scott Soule Cheloha)
Setting IPL_MPSAFE for the i8254/mc146818 IRQs appeases a KASSERT in apic_intr_establish() and allows the system to boot via the i8254 path. This makes testing changes to the i8254/mc146818 code much easier on modern hardware without mucking with the GENERIC config. We already set IPL_MPSAFE for these IRQs in the equivalent amd64 code. Now, setting IPL_MPSAFE is a lie: the i8254 and mc146818 IRQs are not MP-safe. However, the lie is harmless because we only reach i8254_initclocks() if (a) there is no APIC at all, or (b) we fail to calibrate the local APIC timer. Thread: https://marc.info/?l=openbsd-tech&m=169258915227321&w=2 ok mlarkin@
2023-08-21  Remove dead code.  (Miod Vallat)
2023-08-21  alpha: stop running an independent schedclock()  (Scott Soule Cheloha)
alpha is the only platform still running an independent schedclock(). Disabling it brings alpha's scheduling behavior into line with that of every other platform. With this patch, all platforms call schedclock() from statclock() at an effective schedhz of ~12.5.
2023-08-21  cpu_idle_{enter,leave}() are no-ops in ASM; replace them  (Philip Guenther)
with no-op macros. ok gkoehler@
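A minimal sketch of the replacement, assuming the usual no-op macro idiom; the mips64 and riscv64 commits further down do the same thing:

/* In the port's machine-dependent cpu header: the calls now compile away entirely. */
#define cpu_idle_enter()	do { /* nothing */ } while (0)
#define cpu_idle_leave()	do { /* nothing */ } while (0)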
2023-08-19  Check for powerpc64 cores that fail to start  (George Koehler)
If the core failed to start (because opal_start_cpu didn't return OPAL_SUCCESS), or failed to identify, then don't use the core. Eduardo Pires told ppc@ in April 2023 about a machine that froze at boot; cpu1 had failed to start with error -14 OPAL_WRONG_STATE. See https://marc.info/?l=openbsd-ppc&m=168106893329069&w=2 ok miod@
2023-08-16  avoid bios sign msr on intel family < 6  (Jonathan Gray)
the pentium msr list in the sdm does not include it
2023-08-16  avoid patch level msr on amd families < 0fh  (Jonathan Gray)
Paul de Weerd reported it isn't implemented on ALIX with
cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class) 499 MHz, 05-0a-02
the earliest amd microcode update files I can find are for family 0fh (K8)
ok guenther@
2023-08-16  add Intel ARCH_CAP_GDS bits  (Jonathan Gray)
mentioned in https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/gather-data-sampling.html
2023-08-15  drop MSDOSFS from i386 floppy  (Jonathan Gray)
sthen mentioned it is out of space. ok deraadt@
2023-08-15  Replace a bunch of (1 << 31) with (1U << 31)  (Miod Vallat)
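For context: with a 32-bit int, shifting a signed 1 into bit 31 is undefined behaviour in C, hence the unsigned constant. A tiny illustrative example (the macro name is made up):

#include <stdint.h>

/* (1 << 31) shifts a signed 1 into the sign bit, which is undefined
 * behaviour when int is 32 bits; the unsigned form is well defined. */
#define EXAMPLE_BIT31	(1U << 31)	/* hypothetical flag name */

static inline int
bit31_is_set(uint32_t reg)
{
	return (reg & EXAMPLE_BIT31) != 0;
}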
2023-08-14  Skip leading dash in kernel boot options instead of complaining it is an  (Miod Vallat)
unknown option character.
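A hedged sketch of the parsing idea; the option letters and function below are illustrative, not the actual MD boot code:

/* Accept "bsd -s" style strings: swallow one leading dash instead of
 * reporting '-' as an unknown option character (illustrative only). */
static void
parse_boot_options(const char *opts)
{
	if (*opts == '-')
		opts++;
	while (*opts != '\0') {
		switch (*opts++) {
		case 's':	/* single user */		break;
		case 'a':	/* ask for root device */	break;
		default:	/* unknown option character */	break;
		}
	}
}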
2023-08-14  Add a copyin32() implementation.  (Miod Vallat)
2023-08-12  Fix comments regarding pcb_onfault maintenance. No code change.  (Miod Vallat)
2023-08-11  agtimer(4/arm64): call CPU_BUSY_CYCLE() during spin-loop  (Scott Soule Cheloha)
For consistency with other delay(9) implementations, agtimer(4/arm64) ought to call CPU_BUSY_CYCLE() as it spins.
kettenis@ notes that we could reduce the power consumed in agtimer_delay() by enabling CNTKCTL_EL1.EVNTEN and configuring CNTKCTL_EL1.EVNTI. kettenis@ also notes that Armv8.7 adds FEAT_WFxT, which will, when the feature appears in real hardware, make it even easier to save power in agtimer_delay().
With input from drahn@ and kettenis@.
Thread: https://marc.info/?l=openbsd-tech&m=169146193022516&w=2
ok kettenis@
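A simplified sketch of the spin-loop shape being described; the counter-read callback and the fallback CPU_BUSY_CYCLE() definition are stand-ins so the fragment is self-contained:

#include <stdint.h>

#ifndef CPU_BUSY_CYCLE
#define CPU_BUSY_CYCLE()	do { /* e.g. a "yield"/"pause" hint */ } while (0)
#endif

/* Spin until the timer has advanced by the requested number of cycles. */
static void
delay_cycles_sketch(uint64_t cycles, uint64_t (*read_count)(void))
{
	uint64_t start = read_count();

	while (read_count() - start < cycles)
		CPU_BUSY_CYCLE();	/* busy-wait hint on every spin iteration */
}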
2023-08-10  agtimer(4/arm64): agtimer_delay: compute cycle count with 64-bit arithmetic  (Scott Soule Cheloha)
Converting from microseconds to timer cycles is much simpler with 64-bit arithmetic. Thread: https://marc.info/?l=openbsd-tech&m=169146193022516&w=2 ok drahn@ kettenis@
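For illustration, the 64-bit conversion reduces to a one-liner; parameter names are placeholders, and the real code reads the frequency from the timer:

#include <stdint.h>

/* Convert a delay in microseconds into timer cycles using 64-bit arithmetic,
 * avoiding the multi-step 32-bit scaling the old code needed. */
static uint64_t
usecs_to_cycles(uint32_t usecs, uint64_t timer_hz)
{
	return (uint64_t)usecs * timer_hz / 1000000;
}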
2023-08-10  Take advantage of the fact that the WFI instruction does continue  (Mark Kettenis)
immediately if there is a pending interrupt to fix a potential race in the idle loop. ok guenther@
2023-08-10  The Lenovo X13s has broken firmware that makes it impossible to use PAC.  (Mark Kettenis)
But other machines that use the same SoC work just fine. So instead of disabling this feature on all CPUs that implement the architected algorithm, add an SMBIOS-based check that just disables the feature on these machines. This means we need to attach smbios0 before cpu0, which in turn means attaching efi0 earlier.
tested by patrick@
2023-08-09  correct platform id mask, it is 3 bits 52:50  (Jonathan Gray)
2023-08-09  show x86 cpu patch level in dmesg  (Jonathan Gray)
ok guenther@ deraadt@
2023-08-07  Revert 1.43 and always make our own mapping of the Mostek chip. Trying to  (Miod Vallat)
reuse the prom mapping here is a bad idea because we alter its writeability and the prom will not always expect this. Repairs powerdown on Tadpole Ultrabook IIe.
discussed with and ok kettenis@
2023-08-05  cpu_idle_{enter,leave} are no-ops on mips64, so just #define  (Philip Guenther)
away the calls ok jca@
2023-08-05  cpu_idle_{enter,leave} are no-ops on riscv64, so just #define  (Philip Guenther)
away the calls ok jca@
2023-08-05  Inform 8bpp capability on 8bpp framebuffer in  (Kenji Aoyama)
WSDISPLAYIO_GETSUPPORTEDDEPTH ioctl. This is needed to use the recently updated wsfb(4) driver in 8bpp mode. We can use a 1bpp X server on an 8bpp framebuffer with 'startx -- -depth 1'. Tested by me.
2023-08-02  Revert r1.31 - contrary to what I wrote, scaled versions of ld.d and st.d  (Miod Vallat)
are 64-bit loads and stores and may hit aligned-to-32-bits-but-not-64-bits addresses.
2023-08-01  Add (limited) support for setting PLL0 on JH7110.  (Mark Kettenis)
ok jsing@
2023-07-31  Mark code parameter of codepatch_replace() constant also on i386.  (Alexander Bluhm)
OK guenther@
2023-07-31  Implement audio input source selection.  (Tobias Heider)
from jon at elytron dot openbsd dot amsterdam
feedback and ok miod@
2023-07-31  On CPUs with eIBRS ("enhanced Indirect Branch Restricted Speculation")  (Philip Guenther)
or IBT enabled in the kernel, the hardware should prevent the attacks which retpolines were created to mitigate. In those cases, retpolines should be a net negative for security as they are an indirect branch gadget. They're also slower.
* use -mretpoline-external-thunk to give us control of the code used for indirect branches
* default to using a retpoline as before, but mark it and the other ASM kernel retpolines for code patching
* if the CPU has eIBRS, then enable it
* if the CPU has eIBRS *or* IBT, then codepatch the three different retpolines to just indirect jumps
make clean && make config required after this
ok kettenis@
or IBT enabled the kernel, the hardware should the attacks which retpolines were created to prevent. In those cases, retpolines should be a net negative for security as they are an indirect branch gadget. They're also slower. * use -mretpoline-external-thunk to give us control of the code used for indirect branches * default to using a retpoline as before, but marks it and the other ASM kernel retpolines for code patching * if the CPU has eIBRS, then enable it * if the CPU has eIBRS *or* IBT, then codepatch the three different retpolines to just indirect jumps make clean && make config required after this ok kettenis@