path: root/sys/uvm
2013-11-02  Theo de Raadt
fix some comments
2013-11-02  Kenneth R Westerback
No need to cast constants or simple variables to (daddr_t). Use (u_int64_t) instead of (daddr_t) when casting a variable in an expression passed to DL_SETDSIZE(). Change a variable counting open files from daddr_t to int64_t. ok deraadt@ with the tweak to fix that pesky expression.
2013-09-21  Miod Vallat
Don't invoke pmap_copy() on map holes.
2013-08-13  Mark Kettenis
Make the tree compile again on architectures without drm(4).
ok maja@, miod@, jsg@
2013-07-09  Bob Beck
back out the cache flipper temporarily to work out of tree.
will come back soon. ok deraadt@
2013-06-21  Mark Kettenis
Buffer cache pages are wired but not counted as such. Therefore we have to set the wire count on the pages to 0 before we call uvm_pagefree() on them, just like we do in buf_free_pages(). Otherwise the wired pages counter goes negative. While there, also sprinkle some KASSERTs in there that buf_free_pages() has as well. ok beck@
2013-06-11  Bob Beck
High memory page flipping for the buffer cache.
This change splits the buffer cache free lists into lists of dma reachable buffers and high memory buffers based on the ranges returned by pmemrange. Buffers move from dma to high memory as they age, but are flipped to dma reachable memory if IO is needed to/from a high mem buffer. The total number of buffers allocated is now bufcachepercent of both the dma and the high memory region. This change allows the use of large buffer caches on amd64 machines with more than 4 GB of memory. ok tedu@ krw@ - testing by many.
2013-06-11  Theo de Raadt
final removal of daddr64_t. daddr_t has been 64 bit for a long enough test period; i think 3 years ago the last bugs fell out. ok otto beck others
2013-06-07  Mark Kettenis
Add proper mmap(2) support for drm(4)/inteldrm(4). This changes the DRM_I915_GEM_MMAP and DRM_I915_GEM_MMAP_GTT ioctls to be compatible with Linux. This also is the first step that moves us away from accessing all graphics memory through the GTT, which should make things faster. ok tedu@ (for the uvm bits)
2013-05-30  Ted Unangst
in the brave new world of void *, we don't need caddr_t casts
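Purely as an editorial illustration of the point above (the function below is invented, not from the commit): once an interface takes void *, the historical (caddr_t) casts add nothing, because any object pointer converts to void * implicitly.

    #include <sys/types.h>  /* caddr_t, the historical BSD "char *" alias */
    #include <string.h>

    void
    copy_example(void *dst, const void *src, size_t n)
    {
            memcpy((caddr_t)dst, src, n);   /* old habit: cast before the copy */
            memcpy(dst, src, n);            /* void * prototypes need no cast */
    }

    int
    main(void)
    {
            char a[4], b[4] = "abc";
            copy_example(a, b, sizeof(b));
            return 0;
    }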
2013-05-30  Ted Unangst
UVM_UNLOCK_AND_WAIT no longer unlocks, so rename it to UVM_WAIT.
2013-05-30  Ted Unangst
remove lots of comments about locking per beck's request
2013-05-30  Ted Unangst
remove simple_locks from uvm code. ok beck deraadt
2013-05-29  Ted Unangst
uvm_loan has not (ever) been compiled or used.
2013-05-23  Ted Unangst
the simplelock is a lie
2013-05-14  Miod Vallat
restore ABI compatibility; guenther
2013-05-14  Miod Vallat
Remove `swapin' and `swapout' from uvm statistics, since we haven't swapped out u areas for quite a few years now.
2013-05-03  Florian Obser
fix mem leak in swapmount
pointed out by jsg@ ok tedu@
2013-04-17  Ted Unangst
it is better if we always start addr at something reasonable, and then move it up. previous revision would leave addr uninitialized. pointed out by oga at nicotinebsd.org
2013-04-17  Theo de Raadt
do not permanently avoid the BRKSIZ gap in the heap for mmap. after some allocations have been made, open it up. this is a variation on a previous change that was lost in the great uvm map rewrite. allows some platforms, notably i386, to fully utilize their address space.
2013-04-17  Florian Obser
Unbreak and cleanup diskless swap automount.
Initial diff to replace unclear short variable name "nd" by "nfs_diskless" and to display the real nfs path to swap in pstat -s by deraadt@. Testing by me revealed diskless swap automount had been broken for some time. Fix this by passing and using the correct vnode in nfs_diskless to swapmount(). Lots of input / help deraadt@, tweaks by deraadt@ OK deraadt@
2013-03-31  Theo de Raadt
do not need machine/cpu.h directly
2013-03-28  Theo de Raadt
do not copy additional kernel memory into the swapent.se_path[]
ok tedu
2013-03-27  Ted Unangst
combine several atomic_clearbits calls into one. slightly faster on machines where atomic ops aren't so simple. ok beck deraadt miod
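As an editorial aside, a minimal userland sketch of the idea, using C11 atomics in place of the kernel's atomic bit-clear primitive; the flag names and values are made up for illustration. Each atomic read-modify-write is a separate locked (or LL/SC) operation, so clearing three bits in one call is cheaper than three calls.

    #include <stdatomic.h>
    #include <stdio.h>

    /* hypothetical page-flag bits, for illustration only */
    #define PG_BUSY   0x01U
    #define PG_WANTED 0x02U
    #define PG_FAKE   0x04U

    int
    main(void)
    {
            atomic_uint flags = PG_BUSY | PG_WANTED | PG_FAKE;

            /* three separate atomic read-modify-write operations ... */
            atomic_fetch_and(&flags, ~PG_BUSY);
            atomic_fetch_and(&flags, ~PG_WANTED);
            atomic_fetch_and(&flags, ~PG_FAKE);

            /* ... versus one combined clear of all three bits */
            flags = PG_BUSY | PG_WANTED | PG_FAKE;
            atomic_fetch_and(&flags, ~(PG_BUSY | PG_WANTED | PG_FAKE));

            printf("flags = %#x\n", (unsigned)atomic_load(&flags));
            return 0;
    }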
2013-03-23  Theo de Raadt
refactor sys/param.h and machine/param.h. A lot of #ifdef _KERNEL is added to keep definitions out of user space. The MD files now follow a consistent order -- all namespace intrusion is at the tail and can be cleaned up independently. locore, bootblocks, and libkvm still see enough visibility to build. Checked on 90% of platforms...
2013-03-12  Theo de Raadt
preserving main-branch topology for a perverse reason:
step 3 - re-merge 1.116 to 1.118
2013-03-12  Theo de Raadt
preserving main-branch topology for a perverse reason:
step 2 - re-merge 1.119 (the WAITOK diff)
2013-03-12  Theo de Raadt
preserving main-branch topology for a perverse reason:
step 1 - backout 1.116 to 1.119
2013-03-12  Bob Beck
Fix horrible typo of mine checking for WAITOK flags, found by sthen.
This fix is actually by mikeb@; it needs thorough testing to verify it doesn't bring up other issues in what it hid. ok deraadt@
2013-03-06  Bob Beck
Account for the size of the allocation when defending the pagedaemon reserve.
Spotted by oga@nicotinebsd.org, with help from dhill@. Fix by me. ok miod@
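Editorial sketch of the arithmetic only (the counters and numbers below are assumptions, not the actual uvm code): a request for npages has to be checked as free - npages against the reserve, not merely free, or one large allocation can dip below the reserve in a single step.

    #include <stdbool.h>
    #include <stdio.h>

    /* hypothetical counters, for illustration only */
    static unsigned long free_pages = 120;
    static unsigned long pagedaemon_reserve = 100;

    /* old check: ignores how many pages this allocation takes */
    static bool
    ok_to_allocate_old(unsigned long npages)
    {
            (void)npages;
            return free_pages > pagedaemon_reserve;
    }

    /* fixed check: account for the size of the allocation itself */
    static bool
    ok_to_allocate_new(unsigned long npages)
    {
            return free_pages >= npages &&
                free_pages - npages > pagedaemon_reserve;
    }

    int
    main(void)
    {
            /* a 50-page request: the old check says yes, the new one says no */
            printf("old: %d, new: %d\n",
                ok_to_allocate_old(50), ok_to_allocate_new(50));
            return 0;
    }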
2013-03-03  Miod Vallat
Use local vm_physseg pointers instead of computing vm_physmem[index] gazillions of times. No function change but makes the code a bit smaller. ok mpi@
2013-03-02  Miod Vallat
Simplify uvm_pagealloc() to only need one atomic operation on the page flags instead of two, building upon the knowledge of the state uvm_pagealloc_pg() leaves the uvm_page in. ok mpi@
2013-02-10  Bob Beck
Don't wait for memory from pool while holding vm_map_lock or we can deadlock ourselves - based on an infrequent hang caught by sthen, and diagnosed by kettenis and me. Fix after some iterations is to simply call uvm_map_allocate and allocate the map entry before grabbing the lock so we don't wait while holding the lock. ok miod@ kettenis@
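Editorial sketch of the same pattern in userland terms (pthreads and malloc stand in for the kernel map lock and pool; none of these names come from the actual diff): do the allocation that may sleep before taking the lock, so the lock is never held across a blocking wait.

    #include <pthread.h>
    #include <stdlib.h>

    struct entry { struct entry *next; int value; };

    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct entry *map_head;

    /* hypothetical insert routine, for illustration only */
    int
    map_insert(int value)
    {
            /* allocate first, while holding no locks (this may block or fail) */
            struct entry *e = malloc(sizeof(*e));
            if (e == NULL)
                    return -1;
            e->value = value;

            /* only now take the lock; the critical section never sleeps */
            pthread_mutex_lock(&map_lock);
            e->next = map_head;
            map_head = e;
            pthread_mutex_unlock(&map_lock);
            return 0;
    }

    int
    main(void)
    {
            return map_insert(42);
    }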
2013-02-07  Bob Beck
Bring back reserve enforcement and page daemon wakeup into uvm_pglistalloc.
It was removed as this function was redone to use pmemrange in mid 2010, with the result that kernel malloc and other users of this function can consume the page daemon reserve and run us out of memory. ok kettenis@
2013-02-07  Bob Beck
make sure the page daemon considers BUFPAGES_INACT when deciding to do work, just as is done when waking it up. tested by me, phessler@, espie@, landry@ ok kettenis@
2013-01-29  Bob Beck
7 &&'ed elements in a single KASSERT involving complex tests is just painful when you hit it. Separate out these tests. ok millert@ kettenis@, phessler@, with miod@ bikeshedding.
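An editorial sketch with standard assert(3) in place of the kernel's KASSERT(), with invented conditions: when one compound assertion fires you only learn that some clause of the whole expression failed, while separate assertions name the exact guilty condition in the abort or panic message.

    #include <assert.h>

    struct page { int busy, wired, refcount; };

    static void
    check_page(const struct page *pg)
    {
            /* one compound assertion: a failure names the whole expression */
            assert(pg != NULL && !pg->busy && !pg->wired && pg->refcount == 0);

            /* separate assertions: the failing line identifies the condition */
            assert(pg != NULL);
            assert(!pg->busy);
            assert(!pg->wired);
            assert(pg->refcount == 0);
    }

    int
    main(void)
    {
            struct page pg = { 0, 0, 0 };
            check_page(&pg);
            return 0;
    }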
2013-01-21  Bob Beck
Stop hiding when this is failing - make this as obvious as it is when uvm_wait gets hit from the pagedaemon. - code copied from uvm_wait. ok guenther@, kettenis@
2013-01-16  Theo de Raadt
in uvm_coredump, use RB_FOREACH_SAFE because we are torturing the map inside the loop. Fixes a.out coredumps for miod, solution from guenther. ok miod
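An editorial sketch of the pattern using the <sys/tree.h> red-black tree macros (present on the BSDs, available via libbsd elsewhere); the tree and node type are invented for illustration. RB_FOREACH_SAFE saves a pointer to the next element before the body runs, so the current entry can be removed and freed without breaking the walk, which plain RB_FOREACH cannot tolerate.

    #include <sys/tree.h>
    #include <stdlib.h>

    struct node {
            RB_ENTRY(node) entry;
            int key;
    };

    static int
    nodecmp(struct node *a, struct node *b)
    {
            return a->key < b->key ? -1 : a->key > b->key;
    }

    RB_HEAD(nodetree, node);
    RB_PROTOTYPE(nodetree, node, entry, nodecmp)
    RB_GENERATE(nodetree, node, entry, nodecmp)

    int
    main(void)
    {
            struct nodetree head = RB_INITIALIZER(&head);
            struct node *n, *next;
            int i;

            for (i = 0; i < 10; i++) {
                    if ((n = malloc(sizeof(*n))) == NULL)
                            break;
                    n->key = i;
                    RB_INSERT(nodetree, &head, n);
            }

            /* remove and free every entry while iterating over the tree */
            RB_FOREACH_SAFE(n, nodetree, &head, next) {
                    RB_REMOVE(nodetree, &head, n);
                    free(n);
            }
            return 0;
    }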
2013-01-16  Theo de Raadt
oops, one IO_NODELOCKED left behind in the a.out coredumper
ok guenther
2013-01-15  Theo de Raadt
Allow SIGKILL to terminate coredumping processes. Semantics decided with kettenis guenther and beck. ok guenther
2013-01-15  Theo de Raadt
Slice & dice coredump write requests into MAXPHYS blocks, and yield between operations. Re-grab the vnode every operation, so that multiple coredumps can be saved at the same time. ok guenther beck etc
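An editorial userland sketch of the slicing part only (write(2) on a file descriptor stands in for the kernel's vnode I/O, and the yield is just a comment; MAXPHYS comes from <sys/param.h> on BSD, with a fallback for systems that lack it): no single write request exceeds MAXPHYS bytes.

    #include <sys/param.h>
    #include <unistd.h>

    #ifndef MAXPHYS
    #define MAXPHYS (64 * 1024)     /* fallback when the header doesn't define it */
    #endif

    /* write buf in MAXPHYS-sized slices; 0 on success, -1 on error */
    int
    write_sliced(int fd, const char *buf, size_t len)
    {
            while (len > 0) {
                    size_t chunk = len > MAXPHYS ? MAXPHYS : len;
                    ssize_t n = write(fd, buf, chunk);

                    if (n == -1)
                            return -1;
                    /* the kernel version would yield here between operations */
                    buf += n;
                    len -= n;
            }
            return 0;
    }

    int
    main(void)
    {
            const char msg[] = "hello\n";
            return write_sliced(STDOUT_FILENO, msg, sizeof(msg) - 1);
    }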
2012-12-10  Bob Beck
Always back the buffer cache off on any page daemon wakeup. This avoids a few problems noticed by phessler@ and beck@ where certain allocations would repeatedly wake the page daemon even though the page daemon's targets were met already so it didn't do any work. We can avoid this problem when the buffer cache has pages to throw away by always doing so any time the page daemon is woken, rather than only when we are under the free page target. ok phessler@ deraadt@
2012-11-10  Mark Kettenis
Number of swap pages in use must be smaller than the total number of swap pages, so fix nonsensical comparison introduced in rev 1.77. ok miod@, krw@, beck@
2012-11-07  Bob Beck
Fix the buffer cache.
A long time ago (in Vienna) the reserves for the cleaner and syncer were removed. softdep and many things have not performed the same ever since. Follow-on generations of buffer cache hackers assumed the existing code was the reference and have been in a frustrating state of coprophagia ever since. This commit:
0) Brings back a (small) reserve allotment of buffer pages, and the kva to map them, to allow the cleaner and syncer to run even when under intense memory or kva pressure.
1) Fixes a lot of comments and variables to represent reality.
2) Simplifies and corrects how the buffer cache backs off down to the lowest level.
3) Corrects how the page daemon asks the buffer cache to back off, ensuring that uvmpd_scan is done to recover inactive pages in low memory situations.
4) Adds a high water mark to the pool used to allocate struct buf's.
5) Corrects the cleaner and the sleep/wakeup cases in both low memory and low kva situations (including accounting for the cleaner/syncer reserve).
Tested by many, with very much helpful input from deraadt, miod, tobiasu, kettenis and others. ok kettenis@ deraadt@ jj@
2012-10-18  Gerhard Roth
Wiring map entries with VM_PROT_NONE only wastes RAM and bears no advantages. We shouldn't do this. If the protection changes later on (and VM_MAP_WIREFUTURE was set), uvm_map_protect() will wire them. Found by Matthias Pitzl. ok miod@ markus@
2012-09-20  Miod Vallat
Now that none of our installation media runs off the swap area, don't bother accounting for a hypothetical miniroot filesystem in swap. ok deraadt@
2012-07-21  Matthew Dempsky
Add a new mmap(2) flag __MAP_NOREMAP for use with MAP_FIXED to indicate that the kernel should fail with MAP_FAILED if the specified address is not currently available instead of unmapping it. Change ld.so on i386 to make use of __MAP_NOREMAP to improve reliability. __MAP_NOREMAP diff by guenther based on an earlier diff by Ariane; ld.so bits by guenther and me. bulk build stress testing of earlier diffs by sthen. ok deraadt; committing now for further testing
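An editorial sketch of the intended semantics: __MAP_NOREMAP was an OpenBSD-specific flag of this era, so it is guarded here and the program still builds where it doesn't exist; the occupied address is simply whatever the first mmap() returns. With MAP_FIXED alone the kernel silently replaces an existing mapping at the requested address; with MAP_FIXED | __MAP_NOREMAP the second call is expected to fail with MAP_FAILED instead.

    #include <sys/mman.h>
    #include <stdio.h>

    int
    main(void)
    {
            /* grab an anonymous page so we have a known-occupied address */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED)
                    return 1;

    #ifdef __MAP_NOREMAP
            /* ask for the same address again, but refuse to clobber it */
            void *q = mmap(p, 4096, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREMAP, -1, 0);
            printf("remap %s\n", q == MAP_FAILED ? "refused" : "succeeded");
    #else
            printf("__MAP_NOREMAP not available on this system\n");
    #endif
            return 0;
    }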
2012-07-18  Matthew Dempsky
comment typo; s/lineair/linear/
2012-07-12  Mike Larkin
Three cases that should be failures, not successes when checking for avail swap region for hibernate.
2012-07-11  Mike Larkin
#ifdef the uvm swap checker fn for hibernate only, to save space in kernels that don't use hibernate. requested by and ok deraadt@