Age | Commit message | Author |
|
From NetBSD.
|
Handle this when syncing filesystems when unmounting.
From NetBSD.
|
of looking at v_dirtyblkhd.
|
syncer's work list.
From NetBSD.
|
Used by the new soft updates code.
|
EWOULDBLOCK. Turns out the two checking conditions were not the same,
and a certain use of rsync uncovered the bug by chewing all available
CPU time; fix from art
|
returned is in 512-byte sectors, so if you're going to use it for
things like DVD, you need to divide the result by 4 (for 2048-byte
sectors). OK deraadt@
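A minimal sketch (not from this commit) of the conversion described above; the helper name is hypothetical, and only the divide-by-4 relationship comes from the message.

#include <sys/param.h>	/* DEV_BSIZE == 512 */

/* Convert a count of 512-byte sectors to 2048-byte (DVD) sectors. */
unsigned long
sectors_to_dvd_sectors(unsigned long nsect_512)
{
	return nsect_512 / (2048 / DEV_BSIZE);	/* 2048 / 512 == 4 */
}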
|
from NetBSD, Sat Jun 26 08:25:25 1999 UTC by augustss:
Add powerhooks, i.e., the ability to register a function that will be
called when the machine does a suspend or resume.
XXX Will go away when Jason's kevents come to life.
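A minimal sketch (not from this commit) of how a driver might use the imported interface, assuming the NetBSD powerhook_establish()/powerhook_disestablish() calls with PWR_SUSPEND/PWR_RESUME arguments; the xx driver and its attach/detach shape are hypothetical.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/device.h>

struct xx_softc {
	struct device	 sc_dev;
	void		*sc_powerhook;
};

/* Called on every suspend/resume once the hook is registered. */
static void
xx_power(int why, void *arg)
{
	struct xx_softc *sc = arg;

	switch (why) {
	case PWR_SUSPEND:
		/* quiesce the hardware before the machine sleeps */
		break;
	case PWR_RESUME:
		/* bring the hardware back up after wakeup */
		break;
	}
}

static void
xx_attach(struct xx_softc *sc)
{
	sc->sc_powerhook = powerhook_establish(xx_power, sc);
}

static void
xx_detach(struct xx_softc *sc)
{
	if (sc->sc_powerhook != NULL)
		powerhook_disestablish(sc->sc_powerhook);
}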
|
We never used it and some parts of it slowed the code down.
Generally clean up the pipe code.
|
This fix is taken from BSD/OS (the file in question being BSD licensed).
It continues to remove a datagram from a socket receive buffer even if there is
an error on the copy-out, so as to leave the buffer in a reasonable state.
Before, the kernel would stop in mid-receive if the copy-out failed, and the
buffer's structural requirements would be violated (since the start of a
datagram must be an address iff ).
Note that if the user provides any invalid addresses as arguments to a
recvmsg(), the datagram at the front of the buffer will be discarded. The more
correct behavior would be not to remove this datagram if the arguments are
invalid. Implementing this behavior requires a lot of significant changes, and
socket receives are a critical path.
Also included are two simple and fairly obvious fixes from the same source.
If non-blocking I/O is set, it makes sure the receive is non-blocking. It also
fixes a slightly over-aggressive optimization.
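A userland sketch (not part of this commit) of the behavior described above: once the copy-out faults, the datagram at the front is gone, so the next recvmsg() returns the following one. The AF_UNIX socketpair is only there to keep the example self-contained.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	int s[2];
	char buf[16];
	struct iovec iov;
	struct msghdr msg;
	ssize_t n;

	if (socketpair(AF_UNIX, SOCK_DGRAM, 0, s) == -1)
		return 1;
	write(s[0], "first", 5);
	write(s[0], "second", 6);

	/* Receive into a deliberately invalid address: fails with EFAULT... */
	memset(&msg, 0, sizeof(msg));
	iov.iov_base = (void *)-1;
	iov.iov_len = sizeof(buf);
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	if (recvmsg(s[1], &msg, 0) == -1)
		printf("recvmsg 1: %s\n", strerror(errno));

	/* ...but "first" has been discarded; this returns "second". */
	iov.iov_base = buf;
	n = recvmsg(s[1], &msg, 0);
	if (n > 0)
		printf("recvmsg 2: %.*s\n", (int)n, buf);
	return 0;
}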
|
names more carefully; art
|
add more support for external managers
|
from. Hopefully this will fix all the hangs/panics where the root device
was not found.
|
They do the same thing, but the former is noticeably faster on sparc
|
problems when stathz runs at a different speed than hz/profhz.
|
an uninitialized variable; art@stacken.kth.se
|
to, at the bottom or the top, depending on your architecture's stack growth
direction. This is in preparation for Linux' clone(2) emulation.
Port maintainers, please check that I did the work right.
|
commit messages:
Scheduler bug fixes and reorganization
* fix the ancient nice(1) bug, where nice +20 processes incorrectly
steal 10 - 20% of the CPU, (or even more depending on load average)
* provide a new schedclock() mechanism at a new clock at schedhz, so high
platform hz values don't cause nice +0 processes to look like they are
niced
* change the algorithm slightly, and reorganize the code a lot
* fix percent-CPU calculation bugs, and eliminate some no-op code
=== nice bug === Correctly divide the scheduler queues between niced and
compute-bound processes. The current nice weight of two (sort of, see
`algorithm change' below) neatly divides the USRPRI queues in half; this
should have been used to clip p_estcpu, instead of UCHAR_MAX. Besides
being the wrong amount, clipping an unsigned char to UCHAR_MAX is a no-op,
and it was done after decay_cpu() which can only _reduce_ the value. It
has to be kept <= NICE_WEIGHT * PRIO_MAX - PPQ or processes can
scheduler-penalize themselves onto the same queue as nice +20 processes.
(Or even a higher one.)
=== New schedclock() mechanism === Some platforms should be cutting down
stathz before hitting the scheduler, since the scheduler algorithm only
works right in the vicinity of 64 Hz. Rather than prescale hz, then scale
back and forth by 4 every time p_estcpu is touched (each occurrence an
abstraction violation), use p_estcpu without scaling and require schedhz
to be generated directly at the right frequency. Use a default of stathz (well,
actually, profhz) / 4, so nothing changes unless a platform defines schedhz
and a new clock.
[ To do: Define these for alpha, where hz==1024, and nice was totally broken.]
=== Algorithm change === The nice value used to be added to the
exponentially-decayed scheduler history value p_estcpu, in _addition_ to
being incorporated directly (with greater weight) into the priority calculation.
At first glance, it appears to be a pointless increase of 1/8 the nice
effect (pri = p_estcpu/4 + nice*2), but it's actually at least 3x that
because it will ramp up linearly but be decayed only exponentially, thus
converging to an additional .75 nice for a load average of one. I killed
this: it makes the behavior hard to control, almost impossible to analyze,
and the effect (~nothing at all for the first second, then somewhat increased
niceness after three seconds or more, depending on load average) pointless.
=== Other bugs === hz -> profhz in the p_pctcpu = f(p_cpticks) calculation.
Collect scheduler functionality. Try to put each abstraction in just one
place.
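A minimal sketch (not from this commit) of the clipping relationship described above. NICE_WEIGHT = 2 is the nice weight quoted in the message; the PPQ and PRIO_MAX values are assumed stand-ins, PUSER is omitted, and p_estcpu is used unscaled per the schedclock() change, so only the relative numbers matter.

#include <stdio.h>

#define	NICE_WEIGHT	2	/* the "nice weight of two" */
#define	PRIO_MAX	20	/* nice ranges over -20..20 */
#define	PPQ		4	/* priorities per run queue (assumed) */

/* Keep p_estcpu <= NICE_WEIGHT * PRIO_MAX - PPQ, not UCHAR_MAX. */
#define	ESTCPULIM(e)	((e) < NICE_WEIGHT * PRIO_MAX - PPQ ? (e) : \
			    NICE_WEIGHT * PRIO_MAX - PPQ)

/* User priority from decayed CPU history plus nice (lower runs first). */
int
user_pri(int estcpu, int nice)
{
	return ESTCPULIM(estcpu) + NICE_WEIGHT * nice;
}

int
main(void)
{
	/*
	 * With the clip, a compute-bound nice +0 process tops out a full
	 * queue (PPQ) below where nice +20 processes start, so the two
	 * groups can no longer share a run queue.
	 */
	printf("nice  +0 CPU hog: %d\n", user_pri(1000, 0));	/* 36 */
	printf("nice +20 process: %d\n", user_pri(0, 20));	/* 40 */
	return 0;
}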
|