author | Mark Kettenis <kettenis@cvs.openbsd.org> | 2023-12-11 22:12:54 +0000 |
---|---|---|
committer | Mark Kettenis <kettenis@cvs.openbsd.org> | 2023-12-11 22:12:54 +0000 |
commit | 0de6004f3c4b6c0d9c50014c604793d658cd1421 (patch) | |
tree | d7228f737247d13c5e1dcc0b85ebbfebd24fcf8e /sys/arch/powerpc64 | |
parent | 70e28086c36f6c95d45010d325ff4280b90c00be (diff) |
Implement per-CPU caching for the page table page (vp) pool and the PTE
descriptor (pted) pool in the arm64 pmap implementation. This
significantly reduces the side effects of contention on the kernel map
lock, contention that is (incorrectly) translated into excessive page
daemon wakeups. This is not a perfect solution, but it does lead to
significant speedups on machines with many CPU cores.
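As a rough illustration of the mechanism, the sketch below shows how a kernel pool can be given a per-CPU cache with OpenBSD's pool_cache_init(9). The pool name, item type, and setup/allocation functions are illustrative placeholders, not the actual arm64 pmap code.

```c
/*
 * Minimal sketch of per-CPU pool caching via pool_cache_init(9).
 * The struct, pool, and function names below are illustrative.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pool.h>

struct example_pted {			/* stand-in for a PTE descriptor */
	uint64_t	 ptd_pte;
	void		*ptd_va;
};

struct pool example_pted_pool;

void
example_pool_setup(void)
{
	/* Ordinary pool: every pool_get()/pool_put() takes the pool mutex. */
	pool_init(&example_pted_pool, sizeof(struct example_pted), 0, IPL_VM,
	    0, "expted", NULL);

	/*
	 * Attach per-CPU item caches.  Subsequent pool_get()/pool_put()
	 * calls are mostly satisfied from per-CPU lists without touching
	 * the shared pool mutex, which is what reduces the contention
	 * described above.  This can only happen once the per-CPU
	 * infrastructure is up, hence the new pmap_init_percpu() hook.
	 */
	pool_cache_init(&example_pted_pool);
}

void *
example_alloc(void)
{
	/* The allocation API is unchanged; the caching is transparent. */
	return pool_get(&example_pted_pool, PR_NOWAIT | PR_ZERO);
}
```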
This requires adding a new pmap_init_percpu() function that gets called
at the point where the kernel is ready to set up the per-CPU pool caches.
Dummy implementations of this function are added for all non-arm64
architectures. Some other architectures can probably benefit from
providing an actual implementation that sets up per-CPU caches for
pmap pools as well.
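A hedged sketch of what an architecture that opts in might provide for this hook follows; it assumes pool_cache_init(9) and borrows the vp/pted pool names from the commit message, but is not the verbatim committed arm64 code.

```c
/*
 * Assumed shape of an opt-in pmap_init_percpu(): called once the kernel
 * can allocate per-CPU data, it enables caching on the pmap's pools.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pool.h>

extern struct pool pmap_vp_pool;	/* page table page (vp) pool */
extern struct pool pmap_pted_pool;	/* PTE descriptor (pted) pool */

void
pmap_init_percpu(void)
{
	pool_cache_init(&pmap_vp_pool);
	pool_cache_init(&pmap_pted_pool);
}
```

Architectures that do not opt in simply define the hook away, which is exactly what the powerpc64 change in the diff below does.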
ok phessler@, claudio@, miod@, patrick@
Diffstat (limited to 'sys/arch/powerpc64')
-rw-r--r-- | sys/arch/powerpc64/include/pmap.h | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/sys/arch/powerpc64/include/pmap.h b/sys/arch/powerpc64/include/pmap.h
index c7f4c9e257b..6489f4af094 100644
--- a/sys/arch/powerpc64/include/pmap.h
+++ b/sys/arch/powerpc64/include/pmap.h
@@ -1,4 +1,4 @@
-/* $OpenBSD: pmap.h,v 1.18 2021/10/12 18:06:15 kettenis Exp $ */
+/* $OpenBSD: pmap.h,v 1.19 2023/12/11 22:12:53 kettenis Exp $ */
 
 /*
  * Copyright (c) 2020 Mark Kettenis <kettenis@openbsd.org>
@@ -64,6 +64,7 @@ extern struct pmap kernel_pmap_store;
 #define pmap_resident_count(pm) ((pm)->pm_stats.resident_count)
 #define pmap_wired_count(pm) ((pm)->pm_stats.wired_count)
 
+#define pmap_init_percpu() do { /* nothing */ } while (0)
 #define pmap_unuse_final(p)
 #define pmap_remove_holes(vm)
 #define pmap_update(pm)