author    Mark Kettenis <kettenis@cvs.openbsd.org>    2023-12-11 22:12:54 +0000
committer Mark Kettenis <kettenis@cvs.openbsd.org>    2023-12-11 22:12:54 +0000
commit    0de6004f3c4b6c0d9c50014c604793d658cd1421 (patch)
tree      d7228f737247d13c5e1dcc0b85ebbfebd24fcf8e /sys/arch/m88k
parent    70e28086c36f6c95d45010d325ff4280b90c00be (diff)
Implement per-CPU caching for the page table page (vp) pool and the PTE
descriptor (pted) pool in the arm64 pmap implementation. This significantly
reduces the side-effects of lock contention on the kernel map lock that is
(incorrectly) translated into excessive page daemon wakeups. This is not a
perfect solution but it does lead to significant speedups on machines with
many CPU cores.

This requires adding a new pmap_init_percpu() function that gets called at
the point where the kernel is ready to set up the per-CPU pool caches.
Dummy implementations of this function are added for all non-arm64
architectures. Some other architectures can probably benefit from providing
an actual implementation that sets up per-CPU caches for pmap pools as well.

ok phessler@, claudio@, miod@, patrick@
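On an architecture that does want this optimization, pmap_init_percpu() is the
natural place to turn on per-CPU caching for its pmap pools. As a minimal
sketch, assuming pools named pmap_vp_pool and pmap_pted_pool (names modeled on
the arm64 pmap described above, not taken from this diff) and OpenBSD's
pool_cache_init(9), a real implementation could look roughly like this:

    /*
     * Hypothetical pmap_init_percpu() for an architecture that keeps its
     * own vp/pted pools; the pool names are illustrative assumptions.
     */
    #include <sys/param.h>
    #include <sys/pool.h>

    extern struct pool pmap_vp_pool;    /* page table page (vp) pool */
    extern struct pool pmap_pted_pool;  /* PTE descriptor (pted) pool */

    void
    pmap_init_percpu(void)
    {
        /* Enable the per-CPU caches on both pmap pools. */
        pool_cache_init(&pmap_vp_pool);
        pool_cache_init(&pmap_pted_pool);
    }

With the per-CPU caches enabled, most pool_get()/pool_put() calls on those
pools can complete without touching the shared pool lock, which is the kind of
contention relief the commit message describes.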
Diffstat (limited to 'sys/arch/m88k')
-rw-r--r--    sys/arch/m88k/include/pmap.h    3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/sys/arch/m88k/include/pmap.h b/sys/arch/m88k/include/pmap.h
index 746ed8ec8dc..735945313f9 100644
--- a/sys/arch/m88k/include/pmap.h
+++ b/sys/arch/m88k/include/pmap.h
@@ -1,4 +1,4 @@
-/* $OpenBSD: pmap.h,v 1.29 2023/04/13 15:23:22 miod Exp $ */
+/* $OpenBSD: pmap.h,v 1.30 2023/12/11 22:12:53 kettenis Exp $ */
 /*
  * Mach Operating System
  * Copyright (c) 1991 Carnegie Mellon University
@@ -64,6 +64,7 @@ int pmap_set_modify(pmap_t, vaddr_t);
 void pmap_unmap_firmware(void);
 boolean_t pmap_unsetbit(struct vm_page *, int);
+#define pmap_init_percpu() do { /* nothing */ } while (0)
 #define pmap_unuse_final(p) /* nothing */
 #define pmap_remove_holes(vm) do { /* nothing */ } while (0)
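The new no-op presumably follows the usual C convention of wrapping an empty
macro body in do { ... } while (0) so that the expansion behaves as exactly one
statement (pmap_unuse_final(p), which expands to nothing at all, is the older
style in this header). A small standalone example, hypothetical and only meant
to illustrate the convention, shows why the wrapper matters in unbraced
conditionals:

    #include <stdio.h>

    /* Same shape as the dummy hook added in this diff. */
    #define pmap_init_percpu()  do { /* nothing */ } while (0)

    int
    main(void)
    {
        int have_percpu_caches = 0;

        /*
         * Because the macro expands to a single statement, this unbraced
         * if/else parses the way it reads; a bare { } expansion followed
         * by the semicolon would break the else branch.
         */
        if (have_percpu_caches)
            pmap_init_percpu();
        else
            printf("no per-CPU pool caches set up\n");

        return 0;
    }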