|
uvm_unmap, uvm_deallocate and a few other functions.
Simplifies some code and reduces diff to the UBC branch.
|
should be equal to "entry->end". (len is never changed)
|
- fix a typo in comment.
- enable uvm_tree_sanity ifdef DEBUG
|
vsize_t instead.
art@ ok
|
to add splassert(IPL_NONE) in a few strategic places.
|
niels@ ok.
|
uvm_tree_sanity is left as debugging help but needs to be enabled manually.
okay art@
|
map_findspace is still broken on alpha. This will make debugging easier.
okay millert@
|
the tree triggers the bug; the PMAP_PREFER case was broken as well.
|
tree to find free space between entries. Speeds up memory allocation,
etc...
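As an illustration of the lookup this enables: a minimal C sketch of an
augmented-tree free-space search, with invented field names and an
unbalanced binary tree standing in for the real red-black tree of map
entries:

    #include <stddef.h>

    /*
     * Simplified map entry: each node remembers the free gap that
     * follows it and the largest gap anywhere in its subtree.
     */
    struct entry {
        unsigned long gap;      /* free space after this entry */
        unsigned long maxgap;   /* largest gap in this subtree */
        struct entry *left, *right;
    };

    /*
     * Find the lowest-addressed entry followed by at least len bytes
     * of free space.  Subtrees whose maxgap is too small are skipped,
     * so the search is O(log n) instead of a linear walk over all
     * map entries.
     */
    static struct entry *
    find_space(struct entry *root, unsigned long len)
    {
        struct entry *e = root;

        if (e == NULL || e->maxgap < len)
            return NULL;        /* no gap anywhere is big enough */
        for (;;) {
            if (e->left != NULL && e->left->maxgap >= len)
                e = e->left;    /* prefer lower addresses */
            else if (e->gap >= len)
                return e;       /* the gap after e fits */
            else
                e = e->right;   /* must be further right */
        }
    }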
|
We allocate map entries for the non-intrsafe kernel maps (most notably
kernel_map and exec_map) from a pool that's backed by kmem_map (to avoid
deadlocking).
This should get rid of MAX_KMAPENT panics.
|
well (not at all) with shortages of the vm_map where the pages are mapped
(usually kmem_map).
Try to deal with it:
- group all information about the backend allocator for a pool in a separate
struct. The pool will only have a pointer to that struct.
- change the pool_init API to reflect that.
- link all pools allocating from the same allocator on a linked list.
- Since an allocator is responsible for waiting for physical memory, it will
only fail (waitok) when it runs out of its backing vm_map; in that case,
carefully drain pools using the same allocator so that va space is freed.
(see comments in code for caveats and details).
- change pool_reclaim to return whether it actually succeeded in freeing some
memory, and use that information to make draining easier and more efficient.
- get rid of PR_URGENT; no one uses it.
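A simplified userland sketch of that arrangement (names and layout are
illustrative, not the actual pool(9) definitions):

    #include <stddef.h>
    #include <string.h>

    struct pool;

    /* Backend allocator; any number of pools may share one. */
    struct pool_backend {
        void *(*ba_alloc)(size_t);
        void (*ba_free)(void *);
        struct pool *ba_pools;          /* pools using this backend */
    };

    struct pool {
        const char *pr_name;
        size_t pr_itemsize;
        struct pool_backend *pr_backend;
        struct pool *pr_next;           /* next pool, same backend */
    };

    /* pool_init now only takes a pointer to the backend description. */
    static void
    pool_init(struct pool *pp, size_t itemsize, const char *name,
        struct pool_backend *ba)
    {
        memset(pp, 0, sizeof(*pp));
        pp->pr_name = name;
        pp->pr_itemsize = itemsize;
        pp->pr_backend = ba;
        pp->pr_next = ba->ba_pools;     /* link onto backend's list */
        ba->ba_pools = pp;
    }

    /* Returns nonzero if some memory was actually freed. */
    static int
    pool_reclaim(struct pool *pp)
    {
        (void)pp;
        return 0;       /* real code releases idle pages here */
    }

    /*
     * On allocation failure, walk every pool sharing the backend and
     * reclaim until address space has been given back.
     */
    static void
    pool_drain_backend(struct pool_backend *ba)
    {
        struct pool *pp;

        for (pp = ba->ba_pools; pp != NULL; pp = pp->pr_next)
            if (pool_reclaim(pp))
                break;
    }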
|
This unbreaks m68k, m88k, sparc and perhaps others, which eventually froze
when hitting swap.
Tested by various people on various platforms.
ok art@
|
machines or some configurations or in some phase of the moon (we actually
don't know when or why) files disappeared. Since we've not been able to
track down the problem in two weeks of intense debugging, and we need
-current to be stable, back out everything to the state it had before UBC.
We apologise for the inconvenience.
|
Today we add a pmap argument to pmap_update() and allocate map entries for
kernel_map from kmem_map instead of using the static entries. This should
get rid of MAX_KMAPENT panics. Also some uvm_loan problems are fixed.
|
Also contains support for page coloring.
|
This time we're getting rid of the KERN_* and VM_PAGER_* error codes and
using errnos instead.
|
for the virtual address.
|
- Use malloc/free instead of MALLOC/FREE for variable-sized allocations.
- Move the memory inheritance code to sys/mman.h and rename it from VM_* to MAP_*.
- various cleanups and simplifications.
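After the rename, userland refers to the MAP_INHERIT_* names from
<sys/mman.h>. A small usage example with minherit(2), assuming an
OpenBSD-style libc:

    #include <sys/mman.h>
    #include <err.h>

    int
    main(void)
    {
        size_t len = 4096;
        void *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");

        /*
         * Children created with fork(2) will share this region
         * instead of inheriting a copy-on-write copy of it.
         */
        if (minherit(p, len, MAP_INHERIT_SHARE) == -1)
            err(1, "minherit");

        return 0;
    }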
|
The only things left in vm/ are dumb wrappers.
vm/vm.h includes uvm/uvm_extern.h
vm/pmap.h includes uvm/uvm_pmap.h
vm/vm_page.h includes uvm/uvm_page.h
|
Including support for zeroing pages in the idle loop (not enabled yet).
|
into objects.
Makes it possible to mmap beyond the size of vaddr_t.
From NetBSD.
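As an illustration, on a 32-bit machine this allows mapping a small window
far into a large sparse file; the offset never has to fit in a pointer.
A sketch (file name invented; requires a filesystem with large file
support):

    #include <sys/mman.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        off_t far = (off_t)6 << 30;     /* 6GB into the object */
        size_t len = 4096;              /* but only one page of VA */
        int fd;
        void *p;

        fd = open("big.dat", O_RDWR | O_CREAT, 0600);
        if (fd == -1)
            err(1, "open");
        if (ftruncate(fd, far + (off_t)len) == -1)
            err(1, "ftruncate");        /* extend the sparse file */

        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
            fd, far);
        if (p == MAP_FAILED)
            err(1, "mmap");

        munmap(p, len);
        close(fd);
        return 0;
    }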
|
The archs that didn't have a proper PMAP_NEW now have a dummy implementation
with wrappers around the old functions.
|
under locked conditions. We currently use a two-parameter SAVE_HINT; to
provide the same functionality, we instead need to validate that the hint is
the one CURRENTLY in the map before substituting it, and we need to do that
while the lock is retained.
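The pattern being described is a compare-before-store under the lock: only
install a new hint if the cached hint is still the value this lookup was
based on. A sketch of a three-argument SAVE_HINT, with a pthread mutex
standing in for the kernel's hint lock:

    #include <pthread.h>

    struct vm_map_entry;

    struct vm_map {
        pthread_mutex_t hint_lock;      /* mock of the kernel lock */
        struct vm_map_entry *hint;
    };

    /*
     * Replace the hint only if it is still `check'; if another thread
     * updated it in the meantime, that newer hint wins.  The test and
     * the store happen under the same lock, so no stale hint can be
     * written back.
     */
    #define SAVE_HINT(map, check, value) do {               \
        pthread_mutex_lock(&(map)->hint_lock);              \
        if ((map)->hint == (check))                         \
            (map)->hint = (value);                          \
        pthread_mutex_unlock(&(map)->hint_lock);            \
    } while (0)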
|
to allow bpf to manage shared address space.
|
- thread_sleep_msg() -> uvm_sleep()
- initialize reference count lock in uvm_anon_{init,add}()
- add uao_flush()
- replace boolean 'islocked' with 'lockflags'
- in uvm_fault() change FALSE to TRUE in 'wide' fault handling
- get rid of uvm_km_get()
- various bug fixes
|
The highlight is some new advice values for madvise(2).
o MADV_DONTNEED will deactivate the pages in the given range, allowing
quicker reuse.
o MADV_FREE will garbage-collect the pages and swap resources, causing the
next fault to either page in new pages from backing store (mapped vnode)
or allocate new zero-fill pages (anonymous mapping).
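From a program, both advices are used the same way; a short example
(anonymous mapping, so MADV_FREE yields zero-fill pages on the next fault):

    #include <sys/mman.h>
    #include <string.h>
    #include <err.h>

    int
    main(void)
    {
        size_t len = 1 << 20;
        char *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        memset(p, 'x', len);

        /* Pages stay valid but become cheap to reuse. */
        if (madvise(p, len, MADV_DONTNEED) == -1)
            err(1, "madvise");

        /*
         * Contents may now be discarded; the next fault gives
         * fresh zero-fill pages for this anonymous mapping.
         */
        if (madvise(p, len, MADV_FREE) == -1)
            err(1, "madvise");

        return 0;
    }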
|
From NetBSD.
|
- make sure that vsunlock doesn't unwire mlocked memory.
- fix locking in uvm_useracc.
- Return the error from uvm_fault_wire in uvm_vslock (will be used soon).
|
instead of depending on the callers to do that (which they don't).
|
Implements mincore(2), mlockall(2) and munlockall(2). mlockall and munlockall
are disabled for the moment.
The rest is mostly cosmetic.
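mincore(2) reports per-page residency through a byte vector, one byte per
page in the range. A minimal usage sketch (the vector element type varies
between systems; OpenBSD uses char):

    #include <sys/mman.h>
    #include <err.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);
        size_t npages = 8, len = npages * (size_t)pagesz;
        size_t i, resident = 0;
        char vec[8], *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        memset(p, 'x', len / 2);        /* touch half the pages */

        if (mincore(p, len, vec) == -1)
            err(1, "mincore");
        for (i = 0; i < npages; i++)
            if (vec[i] & 1)             /* low bit: resident */
                resident++;
        printf("%zu of %zu pages resident\n", resident, npages);
        return 0;
    }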
|
This is to match the code in NetBSD (and make diffs smaller). The new gcc
inlines those functions, so this could also be a performance win.
|