Age | Commit message | Author |
|
- Move the functionality of choosing a process from cpu_switch into
a much simpler function: cpu_switchto. Instead of having the locore
code walk the run queues, let the MI code choose the process we
want to run and only implement the context switching itself in MD
code.
- Let MD context switching run without worrying about spls or locks.
- Instead of having the idle loop implemented with special contexts
in MD code, implement one idle proc for each CPU. Make the idle
loop MI with MD hooks.
- Change the proc lists from the old-style VAX queues to TAILQs.
- Change the sleep queue from VAX queues to TAILQs. This makes
wakeup() go from O(n^2) to O(n); a sketch of the TAILQ walk follows
below.
There will be some MD fallout, but it will be fixed shortly.
There are also a few cleanups to be done after this.
deraadt@, kettenis@ ok
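
A minimal sketch of the sleep-queue point above: with <sys/queue.h>
TAILQs, wakeup() makes a single pass over a hash chain and removes each
matching proc in O(1), so the whole operation is O(n). The bucket sizes
and field names here are illustrative, not the real kernel identifiers.

    #include <sys/queue.h>

    /* Minimal stand-ins for the real kernel structures (hypothetical). */
    struct proc {
    	TAILQ_ENTRY(proc)	 p_slpq;	/* sleep queue linkage */
    	const void		*p_wchan;	/* wait channel */
    };
    TAILQ_HEAD(slpque, proc);

    #define SLPQUE_BUCKETS	64		/* illustrative size */
    static struct slpque slpque[SLPQUE_BUCKETS];	/* TAILQ_INIT'ed at boot */
    #define SLPQUE(ident) \
    	(&slpque[((unsigned long)(ident) >> 8) % SLPQUE_BUCKETS])

    void	setrunnable(struct proc *);	/* provided elsewhere */

    void
    wakeup(const void *ident)
    {
    	struct slpque *qp = SLPQUE(ident);
    	struct proc *p, *next;

    	/* One linear pass; TAILQ_REMOVE is O(1), so a wakeup that
    	 * used to rescan the VAX-style queue after each removal
    	 * is now O(n) overall. */
    	for (p = TAILQ_FIRST(qp); p != NULL; p = next) {
    		next = TAILQ_NEXT(p, p_slpq);
    		if (p->p_wchan == ident) {
    			TAILQ_REMOVE(qp, p, p_slpq);
    			setrunnable(p);
    		}
    	}
    }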
|
|
eyeballed and ok dlg@
|
|
"flt_noram1" would get truncated otherwise.
ok deraadt
|
|
a new struct. Instead of doing a huge rename and dealing with the
fallout for weeks, like other projects that need no mention, we will
slowly and carefully move things out of struct proc into a new struct
process.
- Create struct process and the infrastructure to create and remove them.
- Move threads in a process into struct process (sketched below).
deraadt@, tedu@ ok
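
A hedged sketch of the split; the field names are chosen for
illustration only, and the real layout lives in sys/proc.h.

    #include <sys/queue.h>

    struct process;

    struct proc {
    	TAILQ_ENTRY(proc)	 p_thr_link;	/* sibling threads */
    	struct process		*p_p;		/* back pointer */
    	/* ... per-thread state stays here ... */
    };

    struct process {
    	TAILQ_HEAD(, proc)	 ps_threads;	/* all threads */
    	struct proc		*ps_mainproc;	/* original thread */
    	/* ... shared per-process state migrates here over time ... */
    };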
|
|
it's a good idea to use atomic.h operations on it. This mechanical
change updates all bit operations on p_flag to atomic_{set,clear}bits_int.
The only exception is that P_OWEUPC is set by MI code before calling
need_proftick and it's automatically cleared by ADDUPC. There's
no reason for MD handling of that flag since everyone handles it the
same way.
kettenis@ ok
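
A minimal before/after fragment, assuming a struct proc *p in scope
(P_PROFIL is just a representative flag):

    void
    mark_profiling(struct proc *p)
    {
    	/* Before: a plain read-modify-write; on MP another CPU can
    	 * race on the same word and updates can be lost:
    	 *	p->p_flag |= P_PROFIL;
    	 */

    	/* After: the atomic.h primitive updates the int atomically. */
    	atomic_setbits_int(&p->p_flag, P_PROFIL);

    	/* Clearing works the same way. */
    	atomic_clearbits_int(&p->p_flag, P_PROFIL);
    }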
|
|
doing functional code; i.e. LIST_INIT()
|
|
uses rfork(RFTHREAD) to create threads, which are presently processes
that are a little more tightly bound together. Several new syscalls
are added to facilitate a userland thread library.
All of this is conditional on RTHREADS, which is currently disabled.
ok deraadt
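
A hedged userland sketch of what creating such a thread might look
like; the exact flag combination the rthreads library uses is an
assumption here, and the wrapper function is hypothetical.

    #include <sys/types.h>
    #include <unistd.h>

    /*
     * Sketch: create a "thread" as a tightly bound process sharing the
     * address space. Flags are illustrative; the real thread library
     * wraps rfork() and the new syscalls.
     */
    pid_t
    thread_create_sketch(void (*fn)(void *), void *arg)
    {
    	pid_t tid = rfork(RFPROC | RFTHREAD | RFMEM | RFNOWAIT);

    	if (tid == 0) {		/* new thread runs fn and exits */
    		fn(arg);
    		_exit(0);
    	}
    	return tid;		/* parent: thread id, or -1 on error */
    }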
|
|
'go for it' deraadt@
|
|
somebody else didn't beat us in uid_find().
|
|
Restrict lock count per uid to a global limit, and add a sysctl to
adjust the limit. This prevents a user from creating too many locks.
Problem noticed by Devon O'Dell. ok deraadt miod pedro
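
A minimal sketch of the accounting pattern, assuming a per-uid counter
kept in the structure returned by uid_find() and a global limit
adjustable through sysctl; all names below are illustrative.

    #include <errno.h>

    /* Hypothetical per-uid bookkeeping, as kept by uid_find(). */
    struct uidinfo {
    	long	ui_lockcnt;	/* locks currently held by this uid */
    };

    int maxlocksperuid = 1024;	/* global limit, adjustable via sysctl */

    int
    lock_acquire_check(struct uidinfo *uip)
    {
    	/* Refuse new locks once the uid reaches the global limit. */
    	if (uip->ui_lockcnt >= maxlocksperuid)
    		return (ENOLCK);
    	uip->ui_lockcnt++;
    	return (0);
    }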
|
|
no change in compiler assembly output.
|
|
Shuffle functions around so that the scheduler code is all together.
No real functional changes. ok art@, testing miod@
|
|
Also move the whole deadproc infrastructure to kern_exit; it's only
used there.
miod@ ok
|
|
usage more correct and fix a signed/unsigned format mismatch.
Based on a patch from Patrick Latifi. OK deraadt@
|
|
rescinded 22 July 1999. Proofed by myself and Theo.
|
|
Accessing p_md members from MI code is not legal.
|
|
well (not at all) with shortages of the vm_map where the pages are mapped
(usually kmem_map).
Try to deal with it:
- Group all information about the backend allocator for a pool in a
separate struct. The pool will only have a pointer to that struct.
- Change the pool_init API to reflect that (see the sketch below).
- Link all pools allocating from the same allocator on a linked list.
- Since an allocator is responsible for waiting for physical memory, it
will only fail (waitok) when it runs out of its backing vm_map;
carefully drain pools using the same allocator so that VA space is
freed. (See comments in code for caveats and details.)
- Change pool_reclaim to return whether it actually succeeded in
freeing some memory, and use that information to make draining easier
and more efficient.
- Get rid of PR_URGENT; no one uses it.
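
A sketch of the resulting interface, based only on the description
above; the exact layout and signature should be checked against the
real sys/pool.h.

    #include <sys/types.h>
    #include <sys/queue.h>

    struct pool;	/* opaque here */

    /* All backend-allocator information grouped in one struct; each
     * pool keeps only a pointer to it, and pools sharing an allocator
     * are linked together so they can be drained as a group. */
    struct pool_allocator {
    	void	*(*pa_alloc)(struct pool *, int);	/* grab a page */
    	void	 (*pa_free)(struct pool *, void *);	/* return a page */
    	int	   pa_pagesz;
    	TAILQ_HEAD(, pool) pa_list;	/* pools using this allocator */
    };

    /* pool_init grows a trailing allocator argument. */
    void	pool_init(struct pool *, size_t, u_int, u_int, int,
    	    const char *, struct pool_allocator *);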
|
|
cpu_exit no longer frees the vmspace and u-area. This is now handled by
a separate kernel thread, "reaper". This is to avoid sleeping locks in
the critical path of cpu_exit, where we're not allowed to sleep.
From NetBSD.
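
A minimal sketch of such a reaper loop under these constraints;
deadproc_dequeue() and the deadproc wait channel are hypothetical
stand-ins for the real queue handling.

    /* Runs in normal thread context, so sleeping locks are fine. */
    void
    reaper(void)
    {
    	struct proc *p;

    	for (;;) {
    		/* Wait for cpu_exit to queue a dead process. */
    		while ((p = deadproc_dequeue()) == NULL)  /* hypothetical */
    			tsleep(&deadproc, PVM, "reaper", 0);

    		/* The teardown cpu_exit can no longer do itself:
    		 * freeing the vmspace and the u-area may sleep. */
    		uvm_exit(p);
    		/* ... final zombie bookkeeping ... */
    	}
    }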
|
|
Add an extra flag to hashinit telling it whether it should wait in
malloc. Update all calls to hashinit.
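
For example, a boot-time caller that may sleep now says so explicitly;
the table size and the M_PROC type below are illustrative.

    #include <sys/malloc.h>

    u_long pidhashmask;	/* hash mask filled in by hashinit */
    void *pidhashtbl;

    void
    example_init(void)
    {
    	/* Boot-time setup may sleep for memory. */
    	pidhashtbl = hashinit(64, M_PROC, M_WAITOK, &pidhashmask);
    }

A caller in interrupt context would pass M_NOWAIT instead and must
handle a NULL return.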
|