- set size argument of free()
- remove pointless if expression around free() call (see the sketch below)
ok guenther@
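A minimal before/after sketch of the pattern (the softc fields are made-up
names; free(9) accepts a NULL pointer, so the surrounding if is redundant):

    /* before: two-argument free(), guarded by a pointless NULL check */
    if (sc->sc_buf != NULL)
        free(sc->sc_buf, M_DEVBUF);

    /* after: pass the allocation size and drop the check */
    free(sc->sc_buf, M_DEVBUF, sc->sc_buflen);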
|
softintr_dispatch(). Delete traces of long superseded stats code.
ok beck@ mpi@ uebayasi@
|
testing by krw@, and then many via snapshots
|
machine/lock.h as appropriate.
|
- rename uiomove() to uiomovei() and update all its users.
- introduce uiomove(), which is similar to uiomovei() but with a size_t.
- rewrite uiomovei() as a uiomove() wrapper (sketched below).
ok kettenis@
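A rough sketch of the resulting pair, assuming the usual prototypes
(size_t-based uiomove() and an int-based uiomovei() on top of it):

    int
    uiomovei(void *buf, int n, struct uio *uio)
    {
        /* the int interface simply forwards to the size_t one */
        return (uiomove(buf, (size_t)n, uio));
    }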
|
within a range that is more (or less) restrictive than the default range.
ok deraadt@, stsp@
|
bounce buffer and before we copy data *from* the bounce buffer. Currently
_bus_dmamap_sync() is a no-op, but keeping it #ifdef'ed out in the wrong
place makes no sense.
ok deraadt@, miod@
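A simplified sketch of the ordering being described (names and structure
are illustrative, not the actual i386 ISA DMA code):

    if (op & BUS_DMASYNC_PREWRITE) {
        /* copy data *to* the bounce buffer, then sync it for the device */
        bcopy(origbuf, bouncebuf, len);
        _bus_dmamap_sync(t, map, offset, len, op);
    }
    if (op & BUS_DMASYNC_POSTREAD) {
        /* sync first, then copy data *from* the bounce buffer */
        _bus_dmamap_sync(t, map, offset, len, op);
        bcopy(bouncebuf, origbuf, len);
    }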
|
after discussions with beck deraadt kettenis.
|
ok dlg@ mpi@ deraadt@
|
Use an explicit suffix for the "fld" instruction to shut up clang. The correct
instruction is fldl since we try to load a double-precision value.
GCC actually gets it wrong and emits "flds" (which is harmless).
ok guenther@
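A tiny illustration of why the suffix matters (hypothetical inline asm,
not the code in question); fldl loads a 64-bit double, flds a 32-bit float:

    double d = 1.0;
    float f = 1.0f;

    __asm volatile ("fldl %0; fstp %%st(0)" : : "m" (d)); /* 64-bit load */
    __asm volatile ("flds %0; fstp %%st(0)" : : "m" (f)); /* 32-bit load */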
|
we block all interrupts that can grab the kernel lock. The simplest way to
achieve this is to make sure mutexes always raise the ipl to the highest
level that has interrupts that grab the kernel lock. This will allow us
to have "mpsafe" interrupt handlers at lower priority levels.
No change for non-MULTIPROCESSOR kernels.
ok matthew@
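A conceptual sketch only; IPL_MPLOCK_FLOOR is a made-up name for "the
highest IPL whose handlers grab the kernel lock", and the struct fields
are approximate:

    void
    mtx_init(struct mutex *mtx, int wantipl)
    {
    #ifdef MULTIPROCESSOR
        /* never block below the biglock-grabbing interrupt levels */
        if (wantipl < IPL_MPLOCK_FLOOR)
            wantipl = IPL_MPLOCK_FLOOR;
    #endif
        mtx->mtx_wantipl = wantipl;
        mtx->mtx_oldipl = IPL_NONE;
        mtx->mtx_owner = NULL;
    }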
|
kernel lock upon entry through a new IPL_MPSAFE flag/level.
|
ok kettenis
|
cycles per second isn't reliable, particularly inside "virtual" machines.
cpuspeed can be calculated as 0, which causes a divide by zero later on,
which is bad.
This puts more effort into detecting whether the performance counters are
in use by the hypervisor, or whether they gave us a cpuspeed of 0, so we
can fall back to using rdtsc.
ok jsg@
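A sketch of that fallback; pmc_cpuspeed() and tsc_cpuspeed() are
hypothetical helper names, not the actual functions:

    /* if the hypervisor owns the counters, or they report 0, don't trust them */
    if (pmc_cpuspeed(&cpuspeed) != 0 || cpuspeed == 0)
        cpuspeed = tsc_cpuspeed();  /* fall back to rdtsc calibration */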
|
Prevents strange hang-ups during reboot. Joint work with hshoexer@.
ok mikeb@, mlarkin@, miod@, deraadt@
|
the case we verify if the CPU supports a specific version of the
architectural performance monitoring feature and read out the current
frequency from the fixed-function performance counter of the unhalted
core.
My initial motivation to implement this was the Soekris net6501-70
which comes with an Intel Atom E6xx 1.60GHz CPU. It has a constant
time stamp counter plus SpeedStep support and boots at the lowest
frequency of 600MHz. This caused hw.cpuspeed and hw.setperf to
reflect the wrong values.
The diff is joint work with jsg@. The fixed-function performance
counter read code comes from an earlier diff of his.
OK jsg@
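Roughly the measurement the fixed-function counter allows (the MSR name
and the 100ms delay are assumptions here; the counter must already be
enabled):

    uint64_t before, after;

    before = rdmsr(MSR_PERF_FIXED_CTR1);    /* unhalted core cycles */
    delay(100000);                          /* busy-wait 100 ms */
    after = rdmsr(MSR_PERF_FIXED_CTR1);
    cpuspeed = (after - before) / 100000;   /* cycles per us == MHz */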
|
as it causes hangs in some ports, including libsigsegv's configure script.
Confirmed by krw@, landry@
|
hold the kernel lock, but still need to call one function that needs it.
Instead of grabbing the lock all over the place, move the locks into
the affected functions: trapsignal, scdebug*, ktrsyscall, ktrsysret,
systrace_redirect and ADDUPROF. In the cases where we already hold the
biglock we'll just recurse.
kettenis@, beck@ ok
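The shape of the change, using trapsignal() as the example (signature
approximate; the other functions listed above get the same treatment):

    void
    trapsignal(struct proc *p, int signum, u_long code, int type,
        union sigval sigval)
    {
        /* take the biglock here; this just recurses if the caller holds it */
        KERNEL_LOCK();
        /* ... existing signal delivery ... */
        KERNEL_UNLOCK();
    }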
|
KERNEL_PROC_LOCK -> KERNEL_LOCK
KERNEL_PROC_UNLOCK -> KERNEL_UNLOCK
oga@ ok
|
i386 disobeys the Nth commandment. Fix this. While here, make i386 and amd64
definitions of iplclock and statclock match.
ok art@, kettenis@
|
(interrupt was not for me), 1 (positive: interrupt was for me), or -1
(I am not sure...). We have continued with this practice in as many
drivers as possible, throughout the tree.
This makes some of the architectures use that information in their
interrupt handler calling code -- if 1 is returned (and we know
this specific machine does not have edge-shared interrupts), we
finish servicing other possible handlers on the same pin. If the
interrupt pin remains asserted (from a different device), we will
end up back in the interrupt servicing code of course... but this is
cheaper than calling all the chained interrupts on a pin.
This does of course count on shared level interrupts being properly
sorted by IPL.
There have been some concerns about starvation of drivers which
incorrectly return 1. Those drivers should be hunted down so that
they return -1.
ok and help from various people. In snaps for about a week now.
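A simplified dispatch loop showing how the return value can be used (the
iq/intrhand names follow common convention but are illustrative here):

    struct intrhand *ih;
    int rc;

    for (ih = iq->iq_handlers; ih != NULL; ih = ih->ih_next) {
        rc = (*ih->ih_fun)(ih->ih_arg);
        /*
         * 1 means the handler is certain the interrupt was its device's:
         * on machines without edge-shared interrupts we can stop walking
         * the chain; -1 ("not sure") and 0 keep going.
         */
        if (rc == 1)
            break;
    }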
|
mask out invalid bits to prevent a protection fault.
Original diff by joshe@; further feedback and ok kettenis@
|
ok krw, miod
|
for it. This makes the netisr a real C function, which will help further
development. No noticeable performance change on i386 and amd64.
With input from kettenis@ and miod@ additional OKs mikeb@ and henning@
|
This prevents a protection fault if a userland signal handler
scribbles all over its struct sigcontext.
Help from and ok guenther@ kettenis@
|
hierarchy. Everything attached to a single root node anyway, so at
best we had a bush.
"i think it is good" deraadt@
|
Dell Inspiron 4150 to wake up immediately even though RTC_EN isn't set
in the PM1 Enable register.
ok deraadt@, mlarkin@
|
all jumbled up in the same functions. The rtc (mc chip) and clock (i8254)
startup was also mixed up. This way the soft state and hardware state can
be started in the right order, and it is easy to restart just the
necessary parts upon resume. Tested in numerous cases:
(apic, pic) * (GENERIC.MP, GENERIC) * (mp, non-mp) * (i386, amd64)
ok kettenis
|
is now shared with all processes/threads. As a result, you can now use the
FPU in true process context (instead of just in kernel threads), but you
need to make sure you restore the default FPU state before calling
fpu_kernel_exit() if you change rounding mode, precision or exception masks.
Lots of discussion with thib@ and Mike Belopuhov.
ok thib@, deraadt@
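A hedged usage sketch; fpu_kernel_enter() is assumed here as the
counterpart of the fpu_kernel_exit() mentioned above:

    fpu_kernel_enter();
    /* ... use FPU/SSE instructions in kernel code ... */

    /* restore the default rounding/precision/exception masks if changed */
    fpu_kernel_exit();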
|
ok deraadt@
|
context of some random process that happened to be switched onto the FPU
after the decision was made to send the IPI.
|
the FPU in the kernel.
From Mike Belopuhov; little bits by myself.
Comments/OK kettenis@
|
the weird "pass by reference" that causes problems with gcc4.
ok nicm@, tom@
|
1) When you have a wrapper function in a dmatag that just calls the
_bus_dmamem original, you don't need it; just put the original function
in the tag.
2) Don't trunc_page the avail_end/ISA_BOUNCE_THRESHOLD stuff (see icb
around 00:00 GMT for a discussion of why this is wrong). Make i386 and
amd64 both do this the same way (the amd64 way is cleaner and makes the
third diff actually possible without a lot of pain). Just do
dmamem_alloc_range(0, threshold) and if that fails do an alloc_range(0,
-1) and assume we'll bounce to pick up the pieces. Also, using avail_end
for alloc_range is not nice (miod has been trying to avoid these abuses
iirc), so just use (paddr_t)-1, which is equivalent since you want "any"
memory.
3) Now this is the funny one. Consider point 2, then consider why
using the same bloody function to allocate your bounce buffer is just
f'ing wrong. Instead allocate with alloc_range(0, threshold) to make
sure that our bounce buffer is actually under 16 megs (see the sketch below).
ok deraadt@, kettenis@. Tested by several people.
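Roughly the allocation pattern points 2 and 3 describe (argument names
are placeholders and the threshold macro name is an assumption):

    /* normal memory: try below the threshold, then anywhere, and bounce */
    error = _bus_dmamem_alloc_range(t, size, align, boundary, segs, nsegs,
        &rsegs, flags, (paddr_t)0, ISA_DMA_BOUNCE_THRESHOLD);
    if (error)
        error = _bus_dmamem_alloc_range(t, size, align, boundary, segs,
            nsegs, &rsegs, flags, (paddr_t)0, (paddr_t)-1);

    /* the bounce buffer itself must really sit below the threshold */
    error = _bus_dmamem_alloc_range(t, size, align, boundary, segs, nsegs,
        &rsegs, flags, (paddr_t)0, ISA_DMA_BOUNCE_THRESHOLD);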
|
ok deraadt@
|
issue reported by Slava Pestov.
ok deraadt@
|
Tested by myself, sthen, oga, kettenis, and jasper.
Input from sthen and jasper.
ok kettenis
(Manpage follows shortly.)
|
between instances, saving space in the kernel. Feedback from many (some
incorporated, some left for future work).
ok deraadt, kettenis, "why not" miod.
|
a define needed to get at ``private'' functions that has to be defined
5 or more times isn't much use and may cause namespace issues anyway.
Other archs will probably follow.
Discussed in portugal. "Hell yes" weingart@, ok kettenis@, no
objections miod@
|
interrupt. On some machines the rtc doesn't generate interrupts and we would
end up not running statclock() at all.
ok miod@, art@
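Presumably something along these lines in the timer interrupt handler
(an illustrative sketch, not the committed diff):

    int
    clockintr(void *arg)
    {
        struct clockframe *frame = arg;

        hardclock(frame);
        /* no usable rtc interrupt: drive statclock() from here as well */
        if (stathz == 0)
            statclock(frame);
        return (1);
    }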
|
amd64 isa dma code is identical save for some formatting, and a slight
difference in bus_dmamem_alloc.
"Die x86_!" krw@.