author    | Mark Kettenis <kettenis@cvs.openbsd.org> | 2023-12-11 22:12:54 +0000
committer | Mark Kettenis <kettenis@cvs.openbsd.org> | 2023-12-11 22:12:54 +0000
commit    | 0de6004f3c4b6c0d9c50014c604793d658cd1421
tree      | d7228f737247d13c5e1dcc0b85ebbfebd24fcf8e /sys/arch/arm64/include
parent    | 70e28086c36f6c95d45010d325ff4280b90c00be
Implement per-CPU caching for the page table page (vp) pool and the PTE
descriptor (pted) pool in the arm64 pmap implementation. This
significantly reduces the side-effects of lock contention on the kernel
map lock that is (incorrectly) translated into excessive page daemon
wakeups. This is not a perfect solution but it does lead to significant
speedups on machines with many CPU cores.
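As a rough sketch of what the per-CPU caching amounts to (the pool names
pmap_vp_pool and pmap_pted_pool and the use of OpenBSD's pool_cache_init(9)
are assumptions based on the description above, not taken from this page),
the arm64 hook could look something like:

#include <sys/param.h>
#include <sys/pool.h>

extern struct pool pmap_vp_pool;	/* page table page (vp) pool; assumed name */
extern struct pool pmap_pted_pool;	/* PTE descriptor (pted) pool; assumed name */

void
pmap_init_percpu(void)
{
	/*
	 * Enable per-CPU caching on both pmap pools.  After this,
	 * most pool_get()/pool_put() calls are satisfied from a
	 * local cache instead of taking the shared pool mutex,
	 * which is what cuts down the lock contention described
	 * above.
	 */
	pool_cache_init(&pmap_vp_pool);
	pool_cache_init(&pmap_pted_pool);
}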
This requires adding a new pmap_init_percpu() function that gets called
at the point where the kernel is ready to set up the per-CPU pool caches.
Dummy implementations of this function are added for all non-arm64
architectures. Some other architectures can probably benefit from
providing an actual implementation that sets up per-CPU caches for
pmap pools as well.
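For those other architectures, the dummy implementation is presumably
just an empty function so that machine-independent code can call
pmap_init_percpu() unconditionally; a minimal sketch:

void
pmap_init_percpu(void)
{
	/* nothing yet; per-CPU pool caches could be enabled here later */
}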
ok phessler@, claudio@, miod@, patrick@
Diffstat (limited to 'sys/arch/arm64/include')
-rw-r--r-- | sys/arch/arm64/include/pmap.h | 6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/sys/arch/arm64/include/pmap.h b/sys/arch/arm64/include/pmap.h
index 8d5bbf2eaa9..129fa4079b0 100644
--- a/sys/arch/arm64/include/pmap.h
+++ b/sys/arch/arm64/include/pmap.h
@@ -1,4 +1,4 @@
-/* $OpenBSD: pmap.h,v 1.24 2023/06/10 19:30:48 kettenis Exp $ */
+/* $OpenBSD: pmap.h,v 1.25 2023/12/11 22:12:53 kettenis Exp $ */
 /*
  * Copyright (c) 2008,2009,2014 Dale Rahn <drahn@dalerahn.com>
  *
@@ -101,6 +101,9 @@ extern struct pmap kernel_pmap_;
 vaddr_t pmap_bootstrap(long kvo, paddr_t lpt1, long kernelstart,
     long kernelend, long ram_start, long ram_end);
+void pmap_postinit(void);
+void pmap_init_percpu(void);
+
 void pmap_kenter_cache(vaddr_t va, paddr_t pa, vm_prot_t prot, int cacheable);
 void pmap_page_ro(pmap_t pm, vaddr_t va, vm_prot_t prot);
 void pmap_page_rw(pmap_t pm, vaddr_t va);
@@ -118,7 +121,6 @@ struct pv_entry;
 /* investigate */
 #define pmap_unuse_final(p)	do { /* nothing */ } while (0)
 int pmap_fault_fixup(pmap_t, vaddr_t, vm_prot_t);
-void pmap_postinit(void);
 
 #define __HAVE_PMAP_MPSAFE_ENTER_COW