|
Makes trace in ddb useful.
ok oga
|
|
Now, the PG_RELEASED flag currently has two (maybe three) uses. The
valid one is for use with async io where we want to free the page after
we've paged it out. The other ones are "oh i'd like to free this, but
someone else is playing with it". It's simpler to just sleep on the
damned page instead and stop the fiddling.
First step does the uaos: in uao_detach(), sleep on the object and free
it once we're clean, instead of setting a flag so it's freed later. In
uao_flush(), do the same. Change the iteration over the object in flush
so that we don't have to add marker pages or other such voodoo to the
list when we sleep (NetBSD did that when they had a similar diff); just
always use the hash. We can now change uao_releasepg() to just free the
page, and not bother with the KILLME stuff. When the other objects are
fixed this hook will vanish.
Much discussion with art@ over the idea, and ariane@ over this specific
diff. As mentioned, this one is based loosely on a similar idea in
netbsd.
Been in my tree for a while, survived many make builds, etc, and forcing
paging using ariane's evil program.
ok ariane@, beck@.
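A minimal sketch of the "just sleep on the damned page" idiom described
above, assuming the usual PG_BUSY/PG_WANTED bits, tsleep(9), and
atomic_setbits_int(); illustrative only, not the committed code:

    /* wait until the page's current owner is done with it */
    while (pg->pg_flags & PG_BUSY) {
        /* ask to be woken when the page is unbusied */
        atomic_setbits_int(&pg->pg_flags, PG_WANTED);
        tsleep(pg, PVM, "uaodet", 0);
    }
    /* nobody is fiddling with the page anymore; free it */
    uvm_pagefree(pg);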
|
|
two cases of pool_get() + memset(0) -> pool_get(, PR_ZERO)
1.5 cases of global variables are already zeroed, so don't zero them.
ok ariane@, comments on stuff i'd missed from blambert@ and cnst@.
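For illustration, the shape of that conversion (sc and sc_pool are
hypothetical names, not from the actual diff):

    /* before: allocate, then clear by hand */
    sc = pool_get(&sc_pool, PR_WAITOK);
    memset(sc, 0, sizeof(*sc));

    /* after: have the pool hand back zeroed memory */
    sc = pool_get(&sc_pool, PR_WAITOK | PR_ZERO);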
|
|
K&R function declarations, so switch them all over to ANSI style, in
accordance with the prophecy.
"go for it" art@
|
|
eyeballed and ok dlg@
|
|
to separate locking; on most modern machines this is not enough,
since operations on short types touch other short types that share the
same word in memory.
Merge pg_flags and pqflags again and now use atomic operations to change
the flags. Also bump wire_count to an int; pg_version might become an
int as well, just for alignment.
tested by many, many. ok miod@
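Sketched, the resulting idiom (atomic_setbits_int() and
atomic_clearbits_int() on the merged pg_flags word; the flags chosen
here are just examples):

    /* set and clear page flags with whole-word atomic ops, so
     * concurrent updates can't clobber bits sharing the word */
    atomic_setbits_int(&pg->pg_flags, PG_BUSY | PG_FAKE);
    /* ... */
    atomic_clearbits_int(&pg->pg_flags, PG_BUSY);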
|
|
from mickey
|
|
to "pg_flags" and "pg_version", so that they are a bit easier to work with.
Whoever uses generic names like this for a popular struct obviously doesn't
read much code.
Most architectures compile and there are no functionality changes.
deraadt@ ok ("if something fails to compile, we fix that by hand")
|
|
no change for normal code
|
|
miod@ ok
|
|
|
|
practice. Depending on the list implementation, this might or might
not work, so make it use a safe idiom. ok pedro@ millert@ deraadt@
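The safe idiom, sketched with <sys/queue.h> (the list, predicate, and
free_entry() destructor here are hypothetical, not the commit's actual
data structure):

    struct entry *e, *next;

    /* safe: the next pointer is saved before e can be freed */
    TAILQ_FOREACH_SAFE(e, &head, e_link, next) {
        if (expired(e)) {
            TAILQ_REMOVE(&head, e, e_link);
            free_entry(e);    /* hypothetical destructor */
        }
    }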
|
|
no change in compiler assembly output.
|
|
|
|
well (not at all) with shortages of the vm_map where the pages are mapped
(usually kmem_map).
Try to deal with it:
- group all the information the backend allocator for a pool needs into
a separate struct; the pool will only have a pointer to that struct
(sketched below).
- change the pool_init API to reflect that.
- link all pools allocating from the same allocator on a linked list.
- Since an allocator is responsible for waiting for physical memory, it
will only fail (waitok) when it runs out of its backing vm_map;
carefully drain pools using the same allocator so that va space is
freed. (See comments in code for caveats and details.)
- change pool_reclaim to report whether it actually managed to free some
memory, and use that information to make draining easier and more
efficient.
- get rid of PR_URGENT; no one uses it.
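A rough sketch of the resulting layout (field and variable names follow
OpenBSD's later struct pool_allocator, but treat the details as
illustrative):

    /* everything the backend allocator needs, shared by every
     * pool that allocates from it */
    struct pool_allocator {
        void    *(*pa_alloc)(struct pool *, int);
        void     (*pa_free)(struct pool *, void *);
        int      pa_pagesz;
        /* plus a list head linking all pools using it */
    };

    /* pool_init() now takes a pointer to the allocator */
    pool_init(&somepool, sizeof(struct some_item), 0, 0, 0,
        "somepl", &pool_allocator_nointr);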
|
|
machines, in some configurations, or in some phase of the moon (we
actually don't know when or why), files disappeared. Since we've not
been able to track down the problem in two weeks of intense debugging,
and we need -current to be stable, back out everything to the state it
had before UBC.
We apologise for the inconvenience.
|
|
Also contains support for page coloring.
|
|
This time we're getting rid of KERN_* and VM_PAGER_* error codes and
using errnos instead.
|
|
code is written mostly by Chuck Silvers <chuq@chuq.com>/<chs@netbsd.org>.
Tested for the past few weeks by many developers, should be in a pretty stable
state, but will require optimizations and additional cleanups.
|
|
|
|
|
|
- Use malloc/free instead of MALLOC/FREE for variable-sized allocations.
- Move the memory inheritance code to sys/mman.h and rename it from VM_*
to MAP_* (example below).
- various cleanups and simplifications.
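For instance, with the MAP_INHERIT_* names from <sys/mman.h>, a caller
of minherit(2) looks like this (addr and len are hypothetical):

    #include <sys/mman.h>
    #include <err.h>

    /* keep this region out of any child created by fork(2) */
    if (minherit(addr, len, MAP_INHERIT_NONE) == -1)
        err(1, "minherit");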
|
|
The only things left in vm/ are just dumb wrappers.
vm/vm.h includes uvm/uvm_extern.h
vm/pmap.h includes uvm/uvm_pmap.h
vm/vm_page.h includes uvm/uvm_page.h
|
|
|
|
|
|
Including support for zeroing pages in the idle loop (not enabled yet).
|
|
into objects.
Makes it possible to mmap beyond the size of vaddr_t.
From NetBSD.
|
|
Improve error handling on I/O errors to swap.
From NetBSD.
|
|
The archs that didn't have a proper PMAP_NEW now have a dummy implementation
with wrappers around the old functions.
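Illustratively, such a dummy wrapper might look like the following
(purely a sketch; the old pmap_enter() signature varied across
architectures and releases):

    /* PMAP_NEW-style entry point expressed in terms of the
     * traditional interface */
    void
    pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot)
    {
        pmap_enter(pmap_kernel(), va, pa, prot, TRUE);
    }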
|
|
- thread_sleep_msg() -> uvm_sleep()
- initialize reference count lock in uvm_anon_{init,add}()
- add uao_flush()
- replace boolean 'islocked' with 'lockflags'
- in uvm_fault() change FALSE to TRUE in 'wide' fault handling
- get rid of uvm_km_get()
- various bug fixes
|
|
|
|
wrapper, so this removes a dependence on the old VM system. From NetBSD.
art@ ok
|
|
|
|
This is to match the code in NetBSD (and make the diffs smaller).
The new gcc inlines those functions, so this could also be a performance win.
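The shape of the change, on an invented example (the real functions
live in UVM):

    /* before: a macro */
    #define pg_is_wired(pg)    ((pg)->wire_count != 0)

    /* after: an inline function, type-checked and just as cheap */
    static __inline int
    pg_is_wired(struct vm_page *pg)
    {
        return (pg->wire_count != 0);
    }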
|
|
- Introduce a new type of map that is interrupt safe and never allows
faults in it. mb_map and kmem_map are made intrsafe.
- Add "access protection" to uvm_vslock (to be passed down to uvm_fault
and later to pmap_enter).
- madvise(2) now works (usage sketched after this list).
- various cleanups.
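A minimal userland sketch of what now works (buf and len are
hypothetical; the MADV_* values come from <sys/mman.h>):

    #include <sys/mman.h>
    #include <err.h>

    /* tell the VM system we're done with this range for now */
    if (madvise(buf, len, MADV_DONTNEED) == -1)
        err(1, "madvise");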
|
|
Mostly cleanups, but also a few improvements to pagedaemon for better
handling of low memory and/or low swap conditions.
|
|
Add an extra flag to hashinit telling it whether it should wait in malloc.
Update all calls to hashinit.
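A sketch of a converted call, assuming the kernel hashinit() that takes
an element count, a malloc type, the new wait flag, and a hash mask
pointer (the variable names are invented):

    /* boot-time setup: fine to sleep for memory */
    hashtbl = hashinit(nentries, M_TEMP, M_WAITOK, &hashmask);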
|
|
|
|
|