Age | Commit message | Author |
|
- thread_sleep_msg() -> uvm_sleep()
- initialize reference count lock in uvm_anon_{init,add}()
- add uao_flush()
- replace boolean 'islocked' with 'lockflags'
- in uvm_fault() change FALSE to TRUE in 'wide' fault handling
- get rid of uvm_km_get()
- various bug fixes
|
|
|
|
boolean_t pmap_extract(struct pmap *, vaddr_t, paddr_t *).
Matches NetBSD. Tested by various people on various platforms.
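A minimal usage sketch of the new interface; the helper below is hypothetical
and only illustrates checking the boolean_t return:

    /* Translate a kernel virtual address to its physical address. */
    int
    kva_to_pa(vaddr_t va, paddr_t *pap)
    {
            if (pmap_extract(pmap_kernel(), va, pap) == FALSE)
                    return (EFAULT);        /* no valid mapping at va */
            return (0);
    }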
|
|
The highlight is some more advice for madvise(2).
o MADV_DONTNEED will deactivate the pages in the given range, allowing quicker
  reuse.
o MADV_FREE will garbage-collect the pages and swap resources, causing the
  next fault to either page in new pages from backing store (mapped vnode)
  or allocate new zero-fill pages (anonymous mapping); see the sketch below.
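A minimal userland sketch of how the two hints could be used; the buffer and
length are hypothetical and error handling is abbreviated:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <err.h>

    /* buf/len are assumed to describe page-aligned anonymous memory */
    void
    drop_cache_hints(void *buf, size_t len)
    {
            /* Pages may be deactivated for quick reuse; contents are kept. */
            if (madvise(buf, len, MADV_DONTNEED) == -1)
                    warn("madvise MADV_DONTNEED");

            /*
             * Contents and swap may be thrown away; the next fault gives
             * zero-fill pages (or pages back in from a mapped vnode).
             */
            if (madvise(buf, len, MADV_FREE) == -1)
                    warn("madvise MADV_FREE");
    }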
|
|
From NetBSD.
|
|
|
|
- Change pmap_change_wiring to pmap_unwire because it's only called that way.
- Remove pmap_pageable because it's seldom implemented and when it is, it's
either almost useless or incorrect. The same information is already passed
to the pmap anyway by pmap_enter and pmap_unwire.
|
|
- make sure that vsunlock doesn't unwire mlocked memory.
- fix locking in uvm_useracc.
- Return the error from uvm_fault_wire in uvm_vslock (will be used soon).
|
|
|
|
We might want to use them on types that are bigger than vaddr_t.
Fix all callers that pass pointers without casts.
|
|
CLSIZE -> 1
CLBYTES -> PAGE_SIZE
CLOFSET -> PAGE_MASK
etc.
Some archs also needed cleaning in vmparam.h, so that goes in at the
same time.
|
|
1GB i386 machines need this. The fix is heavily based on Jason Thorpe's fix
found in NetBSD. Here is his original commit message:
Instead of checking vm_physmem[<physseg>].pgs to determine if
uvm_page_init() has completed, add a boolean uvm.page_init_done,
and test against that. Use this same boolean (rather than
pmap_initialized) in pmap_growkernel() to determine if we are
being called via uvm_page_init() to grow the kernel address space.
This fixes a problem on some i386 configurations where pmap_init()
itself was needing to have the kernel page table grown, and since
pmap_initialized was not yet set to TRUE, pmap_growkernel() was
choosing the wrong code path.
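An illustrative sketch of the pattern described above; the helper is
hypothetical and not the actual i386 pmap_growkernel() code:

    /* Grab one physical page for a new kernel page table page. */
    paddr_t
    grow_get_page(void)
    {
            struct vm_page *pg;
            paddr_t pa;

            if (uvm.page_init_done == FALSE) {
                    /*
                     * Called via uvm_page_init(); the page allocator is not
                     * up yet, so steal a page from the physical segments.
                     */
                    if (uvm_page_physget(&pa) == FALSE)
                            panic("grow_get_page: out of memory");
            } else {
                    pg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_USERESERVE);
                    if (pg == NULL)
                            panic("grow_get_page: out of memory");
                    pa = VM_PAGE_TO_PHYS(pg);
            }
            return (pa);
    }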
|
|
when we free it; art@ ok
|
|
on NetBSD's code, as well as some faked POSIX RT extensions by me. This makes
at least simple linuxthreads tests work.
|
|
|
|
|
|
|
|
|
|
Be more careful in amap_alloc1 when handling failed allocations. We could have
incorrectly returned success when the first or second of three allocations
failed.
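Roughly the staged-cleanup pattern the fix moves to; this is a sketch rather
than the committed diff, and the field, pool and malloc-type names reflect my
reading of the amap code:

    amap->am_slots = malloc(totalslots * sizeof(int), M_UVMAMAP, waitf);
    if (amap->am_slots == NULL)
            goto fail1;

    amap->am_bckptr = malloc(totalslots * sizeof(int), M_UVMAMAP, waitf);
    if (amap->am_bckptr == NULL)
            goto fail2;

    amap->am_anon = malloc(totalslots * sizeof(struct vm_anon *),
        M_UVMAMAP, waitf);
    if (amap->am_anon == NULL)
            goto fail3;

    return (amap);                  /* all three allocations succeeded */

fail3:
    free(amap->am_bckptr, M_UVMAMAP);
fail2:
    free(amap->am_slots, M_UVMAMAP);
fail1:
    pool_put(&uvm_amap_pool, amap);
    return (NULL);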
|
|
instead of depending on the callers to do that (which they don't).
|
|
|
|
|
|
Implements mincore(2), mlockall(2) and munlockall(2). mlockall and munlockall
are disabled for the moment.
The rest is mostly cosmetic.
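A minimal userland sketch of mincore(2); the mapping is hypothetical and one
status byte is returned per page:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <err.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Report which pages of a mapping are resident in memory. */
    void
    show_residency(void *addr, size_t len)
    {
            size_t pgsz = getpagesize();
            size_t npages = (len + pgsz - 1) / pgsz;
            char *vec;
            size_t i;

            if ((vec = malloc(npages)) == NULL)
                    err(1, "malloc");
            if (mincore(addr, len, vec) == -1)
                    err(1, "mincore");
            for (i = 0; i < npages; i++)
                    printf("page %lu: %s\n", (unsigned long)i,
                        vec[i] ? "resident" : "not resident");
            free(vec);
    }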
|
|
with a 'make build'. From NetBSD. art@ ok
|
|
just got cranked a little while ago. discussion with millert
|
|
wrapper, so this removes a dependence on the old VM system. From NetBSD.
art@ ok
|
|
Otherwise we can end up in a situation where the syncer waits for pages
and the pagedaemon waits for buffers.
|
|
With soft updates, writing out pages to disk can cause a bunch of allocations.
|
|
Change VM/UVM to use buf_replacevnode to change the vnode associated
with a buffer.
Add v_bioflag for flags written in interrupt handlers
(and read at splbio, though not strictly necessary).
Add vwaitforio and use it instead of a while loop on v_numoutput (see the
sketch below).
Fix race conditions when manipulating the vnode free list.
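A sketch of the vwaitforio() use; the caller and the wait message are
hypothetical:

    /*
     * Wait for every write already issued on vp to complete.  This one
     * call replaces the per-caller while loop on vp->v_numoutput.
     */
    vwaitforio(vp, 0, "vflush_wait", 0);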
|
|
|
|
|
|
|
|
|
|
This is to match (and keep diffs smaller against) the code in NetBSD.
The new gcc inlines those functions, so this could also be a performance win.
|
|
|
|
Only change vm_dsize if the allocation succeeded.
From Jason Thorpe.
|
|
Plus misc cleanup.
|
|
|
|
break swap partitions into sections; each section has its own
encryption key. if a section's key becomes unreferenced, erase it.
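A hypothetical sketch of the idea; the names below are invented for
illustration and are not the committed data structures:

    /* per-section key, reference-counted by the pages encrypted with it */
    struct section_key {
            u_int32_t       sk_key[4];      /* the encryption key */
            u_int16_t       sk_refcount;    /* swapped pages still using it */
    };

    void
    section_key_put(struct section_key *sk)
    {
            if (--sk->sk_refcount == 0)
                    /* no encrypted page references this key: erase it */
                    bzero(sk->sk_key, sizeof(sk->sk_key));
    }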
|
|
|
|
large memory machines. This time I really hope we can go quite a bit
beyond the gig.
|
|
|
|
|
|
sysctl.
|
|
|
|
|
|
- Introduce a new type of map that is interrupt safe and never allows faults
in it. mb_map and kmem_map are made intrsafe.
- Add "access protection" to uvm_vslock (to be passed down to uvm_fault and
later to pmap_enter); see the sketch below.
- madvise(2) now works.
- various cleanups.
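A sketch of the uvm_vslock() use with the new access protection argument; the
caller and variable names are hypothetical:

    /*
     * Wire a user buffer before a device writes into it; the access type
     * is handed down to uvm_fault (and later to pmap_enter).
     */
    uvm_vslock(p, (caddr_t)uaddr, len, VM_PROT_READ | VM_PROT_WRITE);

    /* ... transfer into the wired pages ... */

    uvm_vsunlock(p, (caddr_t)uaddr, len);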
|
|
|
|
|
|
|