"put this process to sleep" and "find a process to run" operations.
no functional change. ok art@

shuffle functions around so that the scheduler code is all together.
no real functional changes. ok art@ testing miod@

encapsulating all such access into well-defined functions
that make sure locking is done as needed.
It also cleans up some uses of wall time vs. uptime in some
places, but there are sure to be more of these needed as
well, particularly in MD code. Also, many current calls
to microtime() should probably be changed to getmicrotime(),
or to the {,get}microuptime() versions.
ok art@ deraadt@ aaron@ matthieu@ beck@ sturm@ millert@ others
"Oh, that is not your problem!" from miod@

things such that code that only needs a second-resolution uptime or wall
time, and used to get that from time.tv_sec or mono_time.tv_sec, now gets
this from the separate time_t globals time_second and time_uptime.
ok art@ niklas@
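
A small hedged sketch of the intended usage; lease_expired() and the
30-second constant are hypothetical, the two globals are the ones the
commit introduces:

    #include <sys/param.h>
    #include <sys/time.h>

    extern time_t time_second;  /* wall-clock seconds */
    extern time_t time_uptime;  /* monotonic seconds since boot */

    /*
     * Hypothetical consumer: uptime is the right base for an interval
     * like this because it never jumps when the wall clock is stepped.
     */
    int
    lease_expired(time_t lease_start)
    {
            return (time_uptime - lease_start > 30);
    }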

scheduling errors on non-i386 yet.
deraadt@ aaron@ ok

Introduce the cpu_info structure, a p_cpu field in struct proc, a global
scheduling context, and various code changes to deal with this. At the
moment no architecture uses this stuff yet, but it will allow us a slow
and controlled migration to the new APIs.
All new code is ifdef'ed out.
ok deraadt@ niklas@
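
A rough sketch of the shape being introduced; beyond p_cpu and a
per-CPU current process, the field names here are guesses, not the
committed layout:

    /*
     * Hypothetical per-CPU structure; the real struct cpu_info is
     * machine-dependent and considerably richer.
     */
    struct cpu_info {
            struct proc *ci_curproc;  /* process running on this CPU */
            /* ... per-CPU scheduling state, MD fields, ... */
    };

    /* struct proc grows a back-pointer to the CPU it runs on: */
    struct proc {
            /* ... existing fields ... */
            struct cpu_info *p_cpu;   /* CPU we are running on */
    };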

is just going to annoy people

p->p_rtime in this case instead of zeroing it; based on an idea
from nordin@. Also add a printf, under #ifdef DIAGNOSTIC, about
microtime() not being monotonic in this case (from miod@). This
version OK otto@

Also add a check for a negative result when subtracting microtime(&now)
from runtime and simply treat this as zero. This should *not* happen
but due to an apparent bug in microtime on dual clock machines, it does.
The microtime bug is currently being examined.
Based on a diff from miod@ with help from otto@; ok deraadt@ otto@
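
A hedged sketch of the guard described; account_runtime() and its
argument are illustrative names, while timersub() and timerclear()
are the standard <sys/time.h> macros:

    #include <sys/param.h>
    #include <sys/time.h>
    #include <sys/systm.h>

    void
    account_runtime(struct timeval *runtime)
    {
            struct timeval now, diff;

            microtime(&now);
            timersub(&now, runtime, &diff);
            if (diff.tv_sec < 0) {      /* clock appeared to go backwards */
    #ifdef DIAGNOSTIC
                    printf("account_runtime: microtime() not monotonic\n");
    #endif
                    timerclear(&diff);  /* treat the negative delta as zero */
            }
            /* ... charge diff to the process ... */
    }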

rescinded 22 July 1999. Proofed by myself and Theo.

matthieu, who noted it now that X is not running as root. ok nordin

declarations (extern int foo), and compensate in the appropriate locations.

(Look ma, I might have broken the tree)

ltsleep takes an additional argument, a simplelock, and unlocks it when
it's safe to do so.
tsleep now becomes a wrapper around ltsleep.
From NetBSD
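
The wrapper relationship, roughly; the argument list follows the
historical interface, so treat this as a sketch rather than the exact
signature:

    #include <sys/param.h>
    #include <sys/proc.h>

    /*
     * tsleep() keeps its old contract.  ltsleep() additionally takes a
     * simplelock and drops it only once the process is safely on the
     * sleep queue, closing the lost-wakeup window.
     */
    int
    tsleep(void *ident, int priority, const char *wmesg, int timo)
    {
            return (ltsleep(ident, priority, wmesg, timo, NULL));
    }

A caller that holds a simplelock passes it as the fifth argument
instead of unlocking by hand before sleeping.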

not once per process.

traced proc. The vnode is in the proc and all functions need the proc.

cpu_exit no longer frees the vmspace and u-area. This is now handled by a
separate kernel thread "reaper". This is to avoid sleeping locks in the
critical path of cpu_exit where we're not allowed to sleep.
From NetBSD
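
A hedged sketch of the pattern; the queue, wait channel, and helper
names here are illustrative, not the committed ones:

    /*
     * Exiting processes get queued by cpu_exit(), and a kernel thread
     * that *is* allowed to sleep performs the actual teardown.
     */
    void
    reaper(void)
    {
            struct proc *p;

            for (;;) {
                    p = dead_proc_dequeue();        /* hypothetical */
                    if (p == NULL) {
                            tsleep(&dead_procs, PVM, "reaper", 0);
                            continue;
                    }
                    /* May take sleeping locks: safe here, not in
                     * cpu_exit(). */
                    free_vmspace_and_uarea(p);      /* hypothetical */
            }
    }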

flags (much nicer for future smp work).
Add two generic functions yield() and preempt(). Use preempt() in uio when
we are told to yield.
Based on my idea, code written by Jason Thorpe from NetBSD.
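
A sketch of yield() in the spirit of the NetBSD code of the era
(splstatclock(), setrunqueue() and mi_switch() are period interfaces;
details may differ from the commit):

    void
    yield(void)
    {
            struct proc *p = curproc;
            int s;

            s = splstatclock();           /* keep the clock out */
            p->p_priority = p->p_usrpri;  /* back to user priority */
            setrunqueue(p);               /* stay runnable... */
            mi_switch();                  /* ...but switch away now */
            splx(s);
    }

preempt() is the same motion, but counted as an involuntary context
switch.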

The function and the argument never change.

context switch actually happens.

problems when stathz runs at a different speed than hz/profhz.

commit messages:

Scheduler bug fixes and reorganization
* fix the ancient nice(1) bug, where nice +20 processes incorrectly
steal 10-20% of the CPU (or even more, depending on load average)
* provide a new schedclock() mechanism at a new clock at schedhz, so high
platform hz values don't cause nice +0 processes to look like they are
niced
* change the algorithm slightly, and reorganize the code a lot
* fix percent-CPU calculation bugs, and eliminate some no-op code

=== nice bug ===
Correctly divide the scheduler queues between niced and compute-bound
processes. The current nice weight of two (sort of, see `algorithm
change' below) neatly divides the USRPRI queues in half; this should
have been used to clip p_estcpu, instead of UCHAR_MAX. Besides being
the wrong amount, clipping an unsigned char to UCHAR_MAX is a no-op,
and it was done after decay_cpu(), which can only _reduce_ the value.
It has to be kept <= NICE_WEIGHT * PRIO_MAX - PPQ or processes can
scheduler-penalize themselves onto the same queue as nice +20
processes. (Or even a higher one.)

=== New schedclock() mechanism ===
Some platforms should be cutting down stathz before hitting the
scheduler, since the scheduler algorithm only works right in the
vicinity of 64 Hz. Rather than prescale hz, then scale back and forth
by 4 every time p_estcpu is touched (each occurrence an abstraction
violation), use p_estcpu without scaling and require schedhz to be
generated directly at the right frequency. Use a default of stathz
(well, actually profhz) / 4, so nothing changes unless a platform
defines schedhz and a new clock.
[ To do: Define these for alpha, where hz==1024, and nice was totally
broken. ]

=== Algorithm change ===
The nice value used to be added to the exponentially-decayed scheduler
history value p_estcpu, in _addition_ to being incorporated directly
(with greater weight) into the priority calculation. At first glance,
it appears to be a pointless increase of 1/8 the nice effect
(pri = p_estcpu/4 + nice*2), but it's actually at least 3x that,
because it will ramp up linearly but be decayed only exponentially,
thus converging to an additional .75 nice for a load average of one.
I killed this: it makes the behavior hard to control, almost
impossible to analyze, and the effect (~nothing at all for the first
second, then somewhat increased niceness after three seconds or more,
depending on load average) pointless.

=== Other bugs ===
hz -> profhz in the p_pctcpu = f(p_cpticks) calculation.
Collect scheduler functionality. Try to put each abstraction in just
one place.
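
For concreteness, a hedged sketch of the clip described under "nice
bug"; the ESTCPULIM() macro name and the schedclock() body follow the
NetBSD code this came from, but treat the details as illustrative:

    #define ESTCPULIM(e) min((e), NICE_WEIGHT * PRIO_MAX - PPQ)

    /*
     * schedclock() ticks at schedhz and charges the running process.
     * The clamp keeps p_estcpu from pushing a process onto (or past)
     * the queues occupied by nice +20 processes.
     */
    void
    schedclock(struct proc *p)
    {
            p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);
            resetpriority(p);   /* recompute priority from p_estcpu */
    }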