author	cheloha <cheloha@cvs.openbsd.org>	2020-08-23 21:38:48 +0000
committer	cheloha <cheloha@cvs.openbsd.org>	2020-08-23 21:38:48 +0000
commit	b2e0fb73546bcfaf752b03fb857d98fbbfc1a6c7 (patch)
tree	5469db21bd2b0ccf79032c4a845aa7d7215f7bd7 /sys/arch
parent	6446d40c4e87255ce69ace9d2d4d6505d7e71479 (diff)
amd64: TSC timecounter: prefix RDTSC with LFENCE
Regarding RDTSC, the Intel ISA reference says (Vol. 2B, 4-545):

> The RDTSC instruction is not a serializing instruction.
>
> It does not necessarily wait until all previous instructions
> have been executed before reading the counter.
>
> Similarly, subsequent instructions may begin execution before the
> read operation is performed.
>
> If software requires RDTSC to be executed only after all previous
> instructions have completed locally, it can either use RDTSCP (if
> the processor supports that instruction) or execute the sequence
> LFENCE;RDTSC.

To mitigate this problem, Linux and DragonFly use LFENCE.  FreeBSD and
NetBSD take a more complex route: they selectively use MFENCE, LFENCE,
or CPUID depending on whether the CPU is AMD, Intel, VIA, or something
else.

Let's start with just LFENCE.  We only use the TSC as a timecounter on
SSE2 systems, so there is no need to conditionally compile the LFENCE.
We can explore conditionally using MFENCE later.  (A sketch of what an
LFENCE-serialized read can look like appears after this message.)

Microbenchmarking on my machine (Core i7-8650U) suggests a penalty of
about 7-10% over a "naked" RDTSC; a hypothetical userspace harness for
approximating such a measurement also appears after this message.
This is acceptable.  It's a bit of a moot point though: the
alternative is a considerably weaker monotonicity guarantee when
comparing timestamps between threads, which is not acceptable.

It's worth noting that kernel timecounting is not *exactly* like
userspace timecounting.  However, they are similar enough that we can
use userspace benchmarks to make conjectures about possible impacts on
kernel performance.

Concerns about kernel performance, in particular the network stack,
were the blocking issue for this patch.  Regarding networking
performance, claudio@ says a 10% slower nanotime(9) or nanouptime(9)
is acceptable and that shaving off "tens of cycles" is a
micro-optimization.  There are bigger optimizations to chase down
before such a difference would matter.

There is additional work to be done here.  We could experiment with
conditionally using MFENCE.  Also, the userspace TSC timecounter does
not have access to the skew adjustments available to the kernel
timecounter.  pirofti@ has suggested a scheme involving RDTSCP and an
array of skews mapped into user memory.  deraadt@ has suggested a
scheme where the skew would be kept in the TCB.  However it is done,
access to the skews will improve monotonicity, which remains a problem
with the TSC.

First proposed by kettenis@ and pirofti@.  With input from pirofti@,
deraadt@, guenther@, naddy@, kettenis@, and claudio@.  Based on
similar changes in Linux, FreeBSD, NetBSD, and DragonFlyBSD.

ok deraadt@ pirofti@ kettenis@ naddy@ claudio@
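For reference, the serialized read is a two-instruction sequence.  The
sketch below shows one way an LFENCE-prefixed RDTSC wrapper can be
written as a C inline; the rdtsc_lfence() actually used by this commit
lives in sys/arch/amd64/include/cpufunc.h, and its exact definition
may differ from this sketch.

static inline uint64_t
rdtsc_lfence(void)
{
	uint32_t hi, lo;

	/*
	 * LFENCE keeps RDTSC from executing until every prior
	 * instruction has completed locally; without it the read
	 * can be reordered ahead of earlier instructions.
	 */
	__asm volatile("lfence; rdtsc" : "=d" (hi), "=a" (lo));
	return ((uint64_t)hi << 32) | lo;
}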
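The 7-10% figure can be approximated from userspace with a simple
loop.  The harness below is hypothetical, not the benchmark actually
used for this commit; ITERS and both helpers are illustrative only.

#include <stdint.h>
#include <stdio.h>

#define ITERS 10000000ULL

static inline uint64_t
rdtsc(void)
{
	uint32_t hi, lo;

	/* Plain (unserialized) timestamp read. */
	__asm volatile("rdtsc" : "=d" (hi), "=a" (lo));
	return ((uint64_t)hi << 32) | lo;
}

static inline uint64_t
rdtsc_lfence(void)
{
	uint32_t hi, lo;

	/* Serialized read: LFENCE, then RDTSC. */
	__asm volatile("lfence; rdtsc" : "=d" (hi), "=a" (lo));
	return ((uint64_t)hi << 32) | lo;
}

int
main(void)
{
	volatile uint64_t sink;
	uint64_t begin, end, i;

	/* Average cost of a naked RDTSC. */
	begin = rdtsc();
	for (i = 0; i < ITERS; i++)
		sink = rdtsc();
	end = rdtsc();
	printf("rdtsc:        %llu cycles/read\n",
	    (unsigned long long)((end - begin) / ITERS));

	/* Average cost of LFENCE;RDTSC. */
	begin = rdtsc();
	for (i = 0; i < ITERS; i++)
		sink = rdtsc_lfence();
	end = rdtsc();
	printf("lfence;rdtsc: %llu cycles/read\n",
	    (unsigned long long)((end - begin) / ITERS));

	return 0;
}

RDTSC counts cycles at the TSC's fixed reference rate, so the per-read
difference between the two loops roughly reflects the LFENCE overhead.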
Diffstat (limited to 'sys/arch')
-rw-r--r--	sys/arch/amd64/amd64/tsc.c	4
1 files changed, 2 insertions, 2 deletions
diff --git a/sys/arch/amd64/amd64/tsc.c b/sys/arch/amd64/amd64/tsc.c
index e7c4b539daa..79730c607a7 100644
--- a/sys/arch/amd64/amd64/tsc.c
+++ b/sys/arch/amd64/amd64/tsc.c
@@ -1,4 +1,4 @@
-/*	$OpenBSD: tsc.c,v 1.19 2020/07/06 13:33:06 pirofti Exp $	*/
+/*	$OpenBSD: tsc.c,v 1.20 2020/08/23 21:38:47 cheloha Exp $	*/
 /*
  * Copyright (c) 2008 The NetBSD Foundation, Inc.
  * Copyright (c) 2016,2017 Reyk Floeter <reyk@openbsd.org>
@@ -211,7 +211,7 @@ cpu_recalibrate_tsc(struct timecounter *tc)
 u_int
 tsc_get_timecount(struct timecounter *tc)
 {
-	return rdtsc() + curcpu()->ci_tsc_skew;
+	return rdtsc_lfence() + curcpu()->ci_tsc_skew;
 }
 
 void