path: root/sys/uvm
Age  Commit message  Author
2013-04-17  it is better if we always start addr at something reasonable, and  (Ted Unangst)
then move it up. previous revision would leave addr uninitialized. pointed out by oga at nicotinebsd.org
2013-04-17  do not permanently avoid the BRKSIZ gap in the heap for mmap. after some  (Ted Unangst)
allocations have been made, open it up. this is a variation on a previous change that was lost in the great uvm map rewrite. allows some platforms, notably i386, to fully utilize their address space.
2013-04-17  Unbreak and cleanup diskless swap automount.  (Florian Obser)
Initial diff to replace unclear short variable name "nd" by "nfs_diskless" and to display the real nfs path to swap in pstat -s by deraadt@ Testing by me revealed diskless swap automount was broken since some time. Fix this by passing and using the correct vnode in nfs_diskless to swapmount(). Lots of input / help deraadt@, tweaks by deraadt@ OK deraadt@
2013-03-31  do not need machine/cpu.h directly  (Theo de Raadt)
2013-03-28  do not copy additional kernel memory into the swapent.se_path[]  (Theo de Raadt)
ok tedu
2013-03-27  combine several atomic_clearbits calls into one. slightly faster on  (Ted Unangst)
machines where atomic ops aren't so simple. ok beck deraadt miod
2013-03-23  refactor sys/param.h and machine/param.h. A lot of #ifdef _KERNEL is added  (Theo de Raadt)
to keep definitions out of user space. The MD files now follow a consistent order -- all namespace intrusion is at the tail and can be cleaned up independently. locore, bootblocks, and libkvm still see enough visibility to build. Checked on 90% of platforms...
2013-03-12  preserving main-branch topology for a perverse reason:  (Theo de Raadt)
step 3 - re-merge 1.116 to 1.118
2013-03-12  preserving main-branch topology for a perverse reason:  (Theo de Raadt)
step 2 - re-merge 1.119 (the WAITOK diff)
2013-03-12  preserving main-branch topology for a perverse reason:  (Theo de Raadt)
step 1 - backout 1.116 to 1.119
2013-03-12  Fix horrible typo of mine checking for WAITOK flags, found by sthen.  (Bob Beck)
This fix actually by mikeb@, this needs thorough testing to verify it doesn't bring up other issues in what it hid. ok deraadt@
2013-03-06  Account for the size of the allocation when defending the pagedaemon reserve.  (Bob Beck)
Spotted by oga@nicotinebsd.org, with help from dhill@. Fix by me. ok miod@
2013-03-03  Use local vm_physseg pointers instead of computing vm_physmem[index] gazillions  (Miod Vallat)
of times. No function change but makes the code a bit smaller. ok mpi@
2013-03-02  Simplify uvm_pagealloc() to only need one atomic operation on the page flags  (Miod Vallat)
instead of two, building upon the knowledge of the state uvm_pagealloc_pg() leaves the uvm_page in. ok mpi@
2013-02-10  Don't wait for memory from pool while holding vm_map_lock or we can  (Bob Beck)
deadlock ourselves - based on an infrequent hang caught by sthen, and diagnosed by kettenis and me. Fix after some iterations is to simply call uvm_map_allocate and allocate the map entry before grabbing the lock so we don't wait while holding the lock. ok miod@ kettenis@
2013-02-07  Bring back reserve enforcement and page daemon wakeup into uvm_pglistalloc,  (Bob Beck)
It was removed as this function was redone to use pmemrange in mid 2010 with the result that kernel malloc and other users of this function can consume the page daemon reserve and run us out of memory. ok kettenis@
2013-02-07  make sure the page daemon considers BUFPAGES_INACT when deciding  (Bob Beck)
to do work, just as is done when waking it up. tested by me, phessler@, espie@, landry@ ok kettenis@
2013-01-29  7 &&'ed elements in a single KASSERT involving complex tests is just painful  (Bob Beck)
when you hit it. Separate out these tests. ok millert@ kettenis@, phessler@, with miod@ bikeshedding.
2013-01-21  Stop hiding when this is failing - make this as obvious as it is  (Bob Beck)
when uvm_wait gets hit from the pagedaemon. - code copied from uvm_wait. ok guenther@, kettenis@
2013-01-16  in uvm_coredump, use RB_FOREACH_SAFE because we are torturing the map  (Theo de Raadt)
inside the loop. Fixes a.out coredumps for miod, solution from guenther. ok miod
2013-01-16  oops, one IO_NODELOCKED left behind in the a.out coredumper  (Theo de Raadt)
ok guenther
2013-01-15  Allow SIGKILL to terminate coredumping processes. Semantics decided  (Theo de Raadt)
with kettenis guenther and beck. ok guenther
2013-01-15  Slice & dice coredump write requests into MAXPHYS blocks, and  (Theo de Raadt)
yield between operations. Re-grab the vnode every operation, so that multiple coredumps can be saved at the same time. ok guenther beck etc
2012-12-10  Always back the buffer cache off on any page daemon wakeup. This avoids  (Bob Beck)
a few problems noticed by phessler@ and beck@ where certain allocations would repeatedly wake the page daemon even though the page daemon's targets were met already so it didn't do any work. We can avoid this problem when the buffer cache has pages to throw away by always doing so any time the page daemon is woken, rather than only when we are under the free page target. ok phessler@ deraadt@
2012-11-10  Number of swap pages in use must be smaller than the total number of swap  (Mark Kettenis)
pages, so fix nonsensical comparison introduced in rev 1.77. ok miod@, krw@, beck@
2012-11-07  Fix the buffer cache.  (Bob Beck)
A long time ago (in vienna) the reserves for the cleaner and syncer were removed. softdep and many things have not performed the same ever since. Follow-on generations of buffer cache hackers assumed the existing code was the reference and have been in a frustrating state of coprophagia ever since. This commit
0) Brings back a (small) reserve allotment of buffer pages, and the kva to map them, to allow the cleaner and syncer to run even when under intense memory or kva pressure.
1) Fixes a lot of comments and variables to represent reality.
2) Simplifies and corrects how the buffer cache backs off down to the lowest level.
3) Corrects how the page daemon asks the buffer cache to back off, ensuring that uvmpd_scan is done to recover inactive pages in low memory situations.
4) Adds a high water mark to the pool used to allocate struct buf's.
5) Corrects the cleaner and the sleep/wakeup cases in both low memory and low kva situations (including accounting for the cleaner/syncer reserve).
Tested by many, with very much helpful input from deraadt, miod, tobiasu, kettenis and others. ok kettenis@ deraadt@ jj@
2012-10-18  Wiring map entries with VM_PROT_NONE only wastes RAM and bears no  (Gerhard Roth)
advantage. We shouldn't do this. If the protection changes later on (and VM_MAP_WIREFUTURE was set), uvm_map_protect() will wire them. Found by Matthias Pitzl. ok miod@ markus@
2012-09-20  Now that none of our installation media runs off the swap area, don't bother  (Miod Vallat)
accounting for a hypothetical miniroot filesystem in swap. ok deraadt@
2012-07-21  Add a new mmap(2) flag __MAP_NOREMAP for use with MAP_FIXED to  (Matthew Dempsky)
indicate that the kernel should fail with MAP_FAILED if the specified address is not currently available instead of unmapping it. Change ld.so on i386 to make use of __MAP_NOREMAP to improve reliability. __MAP_NOREMAP diff by guenther based on an earlier diff by Ariane; ld.so bits by guenther and me bulk build stress testing of earlier diffs by sthen ok deraadt; committing now for further testing
2012-07-18  comment typo; s/lineair/linear/  (Matthew Dempsky)
2012-07-12  Three cases that should be failures, not successes when checking for avail  (Mike Larkin)
swap region for hibernate.
2012-07-11  #ifdef the uvm swap checker fn for hibernate only, to save space in kernels  (Mike Larkin)
that don't use hibernate. requested by and ok deraadt@
2012-07-11  add a check for the total size of swap, abort if too small.  (Mike Larkin)
used by the hibernate code. ok deraadt@
2012-07-11  add uvm_swap_check_range to scan for contig free space at end of swap.  (Mike Larkin)
will be needed shortly for hibernate. ok deraadt@
2012-06-14  Remove uvm_km_kmem_grow printf.  (Ariane van der Steldt)
It's no longer required, code is stable. ok kettenis@
2012-06-14  whitespace cleanup  (Jasper Lievisse Adriaanse)
ok ariane@
2012-06-14  fix typo in comment  (Jasper Lievisse Adriaanse)
ok ariane@
2012-06-06  Fix address-space randomization that was accidentally disabled in  (Matthew Dempsky)
uvm_addr.c r1.3. ok deraadt, tedu
2012-06-03  Make sure uvm_map_extract() enters mappings at an address that doesn't  (Mark Kettenis)
introduce any virtual cache aliasing problems. Fixes a regression introduced by vmmap. ok ariane@, jsing@
2012-06-01  Correct handling of mlock()/munlock() with len==0 to return success  (Philip Guenther)
instead of crashing. Add a KASSERT() to catch other bugs that might result in the tree iterators being reversed. Problem observed by Tom Doherty (tomd at singlesecond.com) ok deraadt@
2012-04-22  Add struct proc * argument to FRELE() and FILE_SET_MATURE() in  (Philip Guenther)
anticipation of further changes to closef(). No binary change. ok krw@ miod@ deraadt@
2012-04-19  Backout misplaced optimization in vmmap.  (Ariane van der Steldt)
(That means the misplaced optimization is back in.) It broke mips and possibly other architectures.
2012-04-17  uvmspace_exec: Remove dysfunctional "optimization".  (Ariane van der Steldt)
The optimization goes to great lengths to use less optimized code paths in place of the simple path, where the latter is actually faster. ok tedu, guenther
2012-04-12  Remove dead UBC code  (Ariane van der Steldt)
2012-04-12  uvm: keep track of maxrss  (Ariane van der Steldt)
The fault path is used to update the maxrss of the faulting proc. Doesn't affect anything, as it was 0 before. Requested by espie, "just commit it" deraadt
2012-04-11  vmmap: speed up allocations  (Ariane van der Steldt)
Reduces O(n log n) allocations to O(log n). ok deraadt, tedu
2012-04-10  Return EINVAL on 0-byte mmap invocation.  (Ariane van der Steldt)
- Posix rules that a 0-byte mmap must return EINVAL - our allocators are unable to distinguish between free memory and 0 bytes of allocated memory
2012-03-23  Make rusage totals, itimers, and profile settings per-process instead  (Philip Guenther)
of per-rthread. Handling of per-thread tick and runtime counters inspired by how FreeBSD does it. ok kettenis@
2012-03-15  Fix vmmap SMALL_KERNEL introduced bug.  (Ariane van der Steldt)
The if (min < VMMAP_MIN_ADDR) min = VMMAP_MIN_ADDR; code should have moved across when the small_kernel diff moved the initialization from uvm_map_setup() to uvm_map_setup_md(). Prevents a nasty panic on hppa, sparc64 (and possibly other archs). kettenis: the diff makes some sense to me
2012-03-15  Reduce installmedia pressure from new vmmap.  (Ariane van der Steldt)
Has fewer special allocators on install media (where they aren't required anyway). Bonus: makes the vmmap initialization code easier to read.