pdhist one until UBC gets back. art@ ok
Diff generated by Chris Kuethe.
instead of the pa. Most callers already had it handy, and those who didn't
only called it for managed pages and were outside time-critical code.
This will allow us to make those functions clean and fast on sparc and
sparc64, letting us avoid unnecessary cache flushes.
deraadt@ miod@ drahn@ ok.
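The first line of this entry is lost in the scrape, so the exact functions go
unnamed here; the change it describes is a signature switch from physical
address to page structure. A minimal compilable sketch, assuming a
pmap_zero_page()-style helper and stand-in types:

/*
 * Sketch only: a pmap helper that used to take the physical address
 * now takes the struct vm_page the caller already holds, so MD code
 * can consult per-page state (e.g. cache status on sparc/sparc64)
 * without a reverse lookup.  Types below are stand-ins.
 */
typedef unsigned long paddr_t;

struct vm_page {
    paddr_t phys_addr;      /* the pa is still reachable from the page */
    int     md_flags;       /* hypothetical MD state, e.g. cache info */
};

/* before: void pmap_zero_page(paddr_t pa); */
void
pmap_zero_page(struct vm_page *pg)
{
    paddr_t pa = pg->phys_addr;

    /* with pg at hand, a sparc pmap could check md_flags here and
     * skip the cache flush for pages known not to be cached */
    (void)pa;               /* ... zero the page at pa ... */
}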
vm_page.
From NetBSD.
This unbreaks m68k, m88k and sparc, and perhaps others, which would
eventually freeze when hitting swap.
Tested by various people on various platforms.
ok art@
machines, or some configurations, or in some phase of the moon (we actually
don't know when or why), files disappeared. Since we've not been able to
track down the problem in two weeks of intense debugging, and we need -current
to be stable, back everything out to the state it had before UBC.
We apologise for the inconvenience.
so that we can get back the old behavior where a vnode with cached data
is less likely to be recycled than a vnode without cached data.
XXX - This is a brute-force solution: we do it wherever uvmexp.vnodepages
is changed. I am not really sure it is correct, but people have been
very happy with the diff so far and want this in the tree.
Today we add a pmap argument to pmap_update() and allocate map entries for
kernel_map from kmem_map instead of using the static entries. This should
get rid of MAX_KMAPENT panics. Also some uvm_loan problems are fixed.
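For the pmap_update() part, a minimal sketch of the prototype change (stub
type; the real declarations live in the pmap headers):

struct pmap;        /* stand-in for the MD pmap */

/* before: void pmap_update(void); -- flushed deferred MMU updates
 * globally.  after: the caller names the pmap it actually touched: */
void    pmap_update(struct pmap *);

/* e.g. after entering kernel mappings:
 *      pmap_update(pmap_kernel());
 */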
been converted to bus_dma ages ago, but since no one has bothered to do that,
I haven't bothered to do more than test that the kernel still builds
with these changes.
Also contains support for page coloring.
uvm_loan.
This time we're getting rid of the KERN_* and VM_PAGER_* error codes and
using errnos instead.
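A sketch of what the conversion means at a call site. The mapping below is a
plausible one, not the literal diff:

#include <errno.h>

/*
 * Old Mach-style codes and plausible errno replacements:
 *      KERN_SUCCESS            -> 0
 *      KERN_INVALID_ADDRESS    -> EFAULT
 *      KERN_PROTECTION_FAILURE -> EACCES
 *      KERN_RESOURCE_SHORTAGE  -> ENOMEM
 */
int
reserve_pages(int have_memory)      /* stand-in for a uvm entry point */
{
    if (!have_memory)
        return (ENOMEM);            /* was KERN_RESOURCE_SHORTAGE */
    return (0);                     /* was KERN_SUCCESS */
}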
code is written mostly by Chuck Silvers <chuq@chuq.com>/<chs@netbsd.org>.
Tested for the past few weeks by many developers; it should be in a pretty
stable state, but will require optimizations and additional cleanups.
UBC, but prerequisites for it.
- Create a daemon that processes async I/O (swap, and paging in the future)
requests that need processing in process context and that were previously
handled by the pagedaemon.
- Convert some ugly ifdef DIAGNOSTIC code to less intrusive KASSERTs (a
minimal example follows this list).
- Misc other cleanups.
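The DIAGNOSTIC-to-KASSERT conversion mentioned above, in minimal form; the
function is hypothetical, and assert() stands in for the kernel macro so the
sketch compiles standalone:

#include <assert.h>

#ifndef KASSERT
#define KASSERT(e)  assert(e)   /* userland stand-in; in-kernel KASSERT
                                 * panics, and is compiled away unless
                                 * DIAGNOSTIC is defined */
#endif

struct vm_page;

void
example(struct vm_page *pg)
{
    /*
     * before:
     *      #ifdef DIAGNOSTIC
     *      if (pg == NULL)
     *          panic("example: NULL page");
     *      #endif
     */
    KASSERT(pg != NULL);
}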
for the virtual address.
- Use malloc/free instead of MALLOC/FREE for variable-sized allocations (a
sketch follows this list).
- Move the memory-inheritance code to sys/mman.h and rename it from VM_* to
MAP_*.
- Various cleanups and simplifications.
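In sketch form, the allocator change from the first item. kern_malloc and
kern_free are stand-ins so this compiles standalone; in-kernel the calls are
malloc(9)/free(9) with a type argument:

#include <stdlib.h>

#define M_TEMP      0       /* stand-in malloc(9) type */
#define M_WAITOK    0       /* stand-in malloc(9) flag */
#define kern_malloc(sz, type, flags)    malloc(sz)
#define kern_free(addr, type)           free(addr)

char *
grab(size_t len)
{
    char *buf;

    /* before: MALLOC(buf, char *, len, M_TEMP, M_WAITOK);
     * the macro form only pays off for compile-time-constant sizes,
     * so for a variable len the plain function is just as fast: */
    buf = kern_malloc(len, M_TEMP, M_WAITOK);

    /* and later: kern_free(buf, M_TEMP); instead of FREE() */
    return (buf);
}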
The only thing left in vm/ are just dumb wrappers.
vm/vm.h includes uvm/uvm_extern.h
vm/pmap.h includes uvm/uvm_pmap.h
vm/vm_page.h includes uvm/uvm_page.h
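Per the list above, each surviving vm/ header is now nothing but a forwarding
include, e.g.:

/* vm/vm_page.h, in its entirety: */
#include <uvm/uvm_page.h>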
Including support for zeroing pages in the idle loop (not enabled yet).
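A rough sketch of the idea, hedged since the first line of this entry is
missing: when the MD idle loop has nothing to run, it pre-zeroes free pages so
later zero-fill faults can skip the bzero. uvm_pageidlezero() is the
NetBSD-derived name; whichqs is the classic run-queue bitmask:

extern volatile unsigned int whichqs;   /* nonzero when something is runnable */
void    uvm_pageidlezero(void);         /* zero one free page, if any */

void
cpu_idle_sketch(void)
{
    while (whichqs == 0)
        uvm_pageidlezero();
}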
into objects.
Gives the possibility to mmap beyond the size of vaddr_t.
From NetBSD.
amount of kmem_map on machines with lots of physical memory.
The archs that didn't have a proper PMAP_NEW now have a dummy implementation
with wrappers around the old functions.
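A sketch of what such a dummy wrapper can look like, assuming the
pmap_kenter_pa() entry point that PMAP_NEW introduced (stub types; the real
wrappers are per-arch):

typedef unsigned long vaddr_t;
typedef unsigned long paddr_t;
typedef int vm_prot_t;
typedef int boolean_t;
#define TRUE 1
struct pmap;

struct pmap *pmap_kernel(void);
void    pmap_enter(struct pmap *, vaddr_t, paddr_t, vm_prot_t,
            boolean_t, vm_prot_t);

/* dummy PMAP_NEW entry point: forward to the classic interface */
void
pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot)
{
    pmap_enter(pmap_kernel(), va, pa, prot, TRUE, 0);
}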
1GB i386 machines need this. The fix is heavily based on Jason Thorpe's
fix in NetBSD. Here is his original commit message:
Instead of checking vm_physmem[<physseg>].pgs to determine if
uvm_page_init() has completed, add a boolean uvm.page_init_done,
and test against that. Use this same boolean (rather than
pmap_initialized) in pmap_growkernel() to determine if we are
being called via uvm_page_init() to grow the kernel address space.
This fixes a problem on some i386 configurations where pmap_init()
itself was needing to have the kernel page table grown, and since
pmap_initialized was not yet set to TRUE, pmap_growkernel() was
choosing the wrong code path.
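The test described above, in compilable sketch form; the field name comes from
the quoted message, and the surrounding struct is a stand-in for the real
struct uvm:

typedef int boolean_t;
#define FALSE 0

struct {
    boolean_t page_init_done;   /* set once uvm_page_init() finishes */
} uvm;

int
called_from_uvm_page_init(void)
{
    /* pmap_growkernel() now keys off this instead of
     * pmap_initialized: */
    return (uvm.page_init_done == FALSE);
}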
wrapper, so this removes a dependence on the old VM system. From NetBSD.
art@ ok
Otherwise we can end up in a situation where the syncer waits for pages
and the pagedaemon waits for buffers.
With soft updates, writing out pages to disk can cause a bunch of allocations.
This is to match (and make diffs smaller against) the code in NetBSD.
The new gcc inlines those functions, so this could also be a performance win.
large memory machines. This time I really hope we can keep going quite a
bit beyond the gig.
- Introduce a new type of map that is interrupt safe and never allows faults
in it. mb_map and kmem_map are made intrsafe.
- Add "access protection" to uvm_vslock (to be passed down to uvm_fault and
later to pmap_enter); a sketch follows this list.
- madvise(2) now works.
- Various cleanups.
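A sketch of the extended uvm_vslock() from the second item; the signature is
assumed from the description, with stub types so it compiles standalone:

#include <stddef.h>

typedef int vm_prot_t;
#define VM_PROT_READ    0x01
#define VM_PROT_WRITE   0x02
struct proc;

/* the new access-protection argument flows down to uvm_fault() and,
 * eventually, to pmap_enter(): */
void    uvm_vslock(struct proc *, void *, size_t, vm_prot_t);

/* e.g. wiring a user buffer that a device will read and fill:
 *      uvm_vslock(p, buf, len, VM_PROT_READ | VM_PROT_WRITE);
 */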
right uvm_map flags values; also fix the error condition check and a
couple of spaces-vs-tabs nits in the same spot of code.
art@ ok
the access type that caused this mapping. This is to simplify pmaps
with mod/ref emulation (none for the moment) and in some cases speed
up pmap_is_{referenced,modified}.
At the same time, clean up some mappings that had too high a protection.
XXX - the access type is incorrect in old vm; it's only used by uvm and MD code.
The actual use of this in pmap_enter implementations is not in this commit.
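In sketch form, the prototype change this entry describes (the era's
six-argument pmap_enter(); per-arch details may differ):

typedef unsigned long vaddr_t;
typedef unsigned long paddr_t;
typedef int vm_prot_t;
typedef int boolean_t;
struct pmap;

/* before:
 *      void pmap_enter(struct pmap *, vaddr_t, paddr_t,
 *          vm_prot_t prot, boolean_t wired);
 * after: the faulting access type rides along, so a pmap doing
 * mod/ref emulation can pre-set referenced/modified state and spare
 * pmap_is_{referenced,modified}() a trip through fault handling: */
void    pmap_enter(struct pmap *, vaddr_t, paddr_t,
            vm_prot_t /* prot */, boolean_t /* wired */,
            vm_prot_t /* access_type */);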
Mostly cleanups, but also a few improvements to pagedaemon for better
handling of low memory and/or low swap conditions.