path: root/sys/uvm/uvm_pdaemon.c
2021-12-15  Use a per-UVM object lock to serialize the lower part of the fault handler. (Martin Pieuchot)
Like the per-amap lock, the `vmobjlock' is principally used to serialize access to objects in the fault handler, allowing faults occurring on different CPUs and different objects to be processed in parallel.

The fault handler now acquires the `vmobjlock' of a given UVM object as soon as it finds one. For now a write-lock is always acquired, even if some operations could use a read-lock.

Every pager, corresponding to a different kind of UVM object, now expects the UVM object to be locked, and some operations, like *_get(), return it unlocked. This is enforced by assertions checking for rw_write_held().

The KERNEL_LOCK() is now pushed to the VFS boundary in the vnode pager.

To ensure the correct amap or object lock is held when modifying a page, many uvm_page* operations now assert for the "owner" lock. However, fields of the "struct vm_page" are still being protected by the global `pageqlock'. To prevent lock ordering issues with the new `vmobjlock' and to reduce differences with NetBSD, this lock is now taken and released for each page instead of around the whole loop.

This commit does not remove the KERNEL_LOCK/UNLOCK() dance; unlocking will follow if there is no fallout.

Ported from NetBSD, tested by many, thanks!

ok kettenis@, kn@
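As a hedged illustration of the pattern described (OpenBSD's rwlock(9) API and the object's `vmobjlock' pointer; the function itself is a sketch, not the committed fault-handler code):

    /*
     * Sketch only: lock the UVM object before touching its pages,
     * as the pagers now assert.  A write-lock is always taken for now.
     */
    void
    with_object_locked(struct uvm_object *uobj)
    {
            rw_enter(uobj->vmobjlock, RW_WRITE);
            KASSERT(rw_write_held(uobj->vmobjlock));
            /* ... pager operations; some, like *_get(), return it unlocked ... */
            rw_exit(uobj->vmobjlock);
    }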
2021-06-29  remove arch ifdefs around drm.h include (Jonathan Gray)
ok deraadt@ kettenis@
2021-06-25  basic radeondrm / X support for riscv64. Ok kettenis@ (Matthieu Herrb)
- add wscons devices
- build radeondrm and add MD uvm bits to support it.
2021-05-31  call drmbackoff() on powerpc64 as well (Jonathan Gray)
ok kettenis@
2021-03-04  Modify `uvmexp.swpgonly' atomically, required for uvm_fault() w/o KERNEL_LOCK() (Martin Pieuchot)
ok kettenis@
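Illustratively, using the atomic_inc_int()/atomic_dec_int() helpers from <sys/atomic.h> (whether the commit uses exactly these helpers is an assumption):

    #include <sys/atomic.h>

    /* Sketch: update the counter without holding the KERNEL_LOCK(). */
    atomic_inc_int(&uvmexp.swpgonly);   /* a page now lives only in swap */
    atomic_dec_int(&uvmexp.swpgonly);   /* ...and was paged back in */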
2021-03-01  If an anon is associated with a page, acquire its lock before any modification. (Martin Pieuchot)
This change should have been part of the previous anon-locking diff and is necessary to run the top part of uvm_fault() unlocked.

ok jmatthew@
2020-11-24  Grab the `pageqlock' before calling uvm_pageclean() as intended. (Martin Pieuchot)
Document which global data structures require this lock and add some asserts where the lock should be held.

Some code paths are still incorrect and should be revisited.

ok jmatthew@
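A minimal sketch of the intended pattern (uvm_lock_pageq(), uvm_unlock_pageq() and uvm_pageclean() exist in OpenBSD's UVM; the wrapper function is invented):

    /* Sketch: the page-queue lock must be held across the clean. */
    void
    clean_one_page(struct vm_page *pg)
    {
            uvm_lock_pageq();
            uvm_pageclean(pg);
            uvm_unlock_pageq();
    }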
2020-09-29  Introduce a helper to check if all available swap is in use. (Martin Pieuchot)
This reduces code duplication, reduces the diff with NetBSD and will help to introduce locks around global variables.

ok cheloha@
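The commit does not name the helper here; a minimal sketch under the assumption that it compares the swap counters in `struct uvmexp' (uvm_swapisfull() is the name used in today's tree, but treat the body as an assumption):

    /*
     * Sketch: all available swap is in use when every swap page
     * holds the only copy of its data.
     */
    int
    uvm_swapisfull(void)
    {
            KASSERT(uvmexp.swpgonly <= uvmexp.swpages);
            return (uvmexp.swpgonly == uvmexp.swpages);
    }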
2020-04-04  Tweak the code that wakes up uvm_pmalloc sleepers in the page daemon. (Mark Kettenis)
Although there are open questions about whether we should flag failures with UVM_PMA_FAIL or not, we really should only wake up a sleeper if we unlink the pma. For now, only do that if pages were actually freed in the requested region.

Prompted by CID 1453061 "Logically dead code", which should be fixed by this commit.

ok (and together with) beck@
2019-12-30  convert infinite msleep(9) to msleep_nsec(9) (Jonathan Gray)
ok mpi@
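The shape of such a conversion (identifiers illustrative; INFSLP, aka UINT64_MAX, is the documented "no timeout" value for the *_nsec interfaces):

    /* before: a timeout of 0 ticks meant "sleep forever" */
    msleep(ident, &mtx, PVM, "wmesg", 0);

    /* after: the infinite sleep is explicit */
    msleep_nsec(ident, &mtx, PVM, "wmesg", INFSLP);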
2019-12-25  Hook up the shrinker for inteldrm(4). (Mark Kettenis)
This is a "light" version that only drops graphics buffers that are cached and not in active use.

Help from beck@ for pointing out how to hook this up to our pagedaemon.

ok jsg@
2019-07-03  Add tsleep_nsec(9), msleep_nsec(9), and rwsleep_nsec(9). (cheloha)
Equivalent to their unsuffixed counterparts except that (a) they take a timeout in terms of nanoseconds, and (b) INFSLP, aka UINT64_MAX (not zero) indicates that a timeout should not be set.

For now, zero nanoseconds is not a strictly valid invocation: we log a warning on DIAGNOSTIC kernels if we see such a call. We still sleep until the next tick in such a case, however. In the future this could become some sort of poll... TBD.

To facilitate conversions to these interfaces: add inline conversion functions to sys/time.h for turning your timeout into nanoseconds. Also do a few easy conversions for warmup and to demonstrate how further conversions should be done.

Lots of input from mpi@ and ratchov@. Additional input from tedu@, deraadt@, mortimer@, millert@, and claudio@.

Partly inspired by FreeBSD r247787.

positive feedback from deraadt@, ok mpi@
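For example (illustrative identifiers; MSEC_TO_NSEC() is one of the sys/time.h conversion helpers the commit describes):

    #include <sys/time.h>       /* MSEC_TO_NSEC() */

    /* sleep for roughly 100ms */
    tsleep_nsec(&sc, PWAIT, "demo", MSEC_TO_NSEC(100));

    /* sleep with no timeout at all */
    tsleep_nsec(&sc, PWAIT, "demo", INFSLP);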
2019-05-10  simplify logic after wakeup since this variable is only manipulated under lock (Bob Beck)
ok guenther@
2019-05-10  Check for nowait failed *after* the wakeup point, not before. (Bob Beck)
ok guenther@
2019-05-09  Ensure that pagedaemon wakeups as a result of failed UVM_PLA_NOWAIT allocations will recover some memory from the dma_constraint range. (Bob Beck)
The allocation still fails; the intent is to ensure that the pagedaemon will free some memory to possibly allow a subsequent allocation to succeed.

This also adds a UVM_PLA_NOWAKE flag to allow special cases in the buffer cache to not wake up the pagedaemon until they want to.

ok kettenis@
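A hedged sketch of where these flags appear, using uvm_pglistalloc(9) (the call shape, dma_constraint and the ucr_low/ucr_high fields are real; the surrounding context is invented):

    struct pglist mlist;

    TAILQ_INIT(&mlist);
    /*
     * Illustrative: a nowait allocation constrained to the DMA range.
     * UVM_PLA_NOWAKE (added here) suppresses the pagedaemon wakeup
     * that a failed UVM_PLA_NOWAIT allocation would now trigger.
     */
    if (uvm_pglistalloc(PAGE_SIZE, dma_constraint.ucr_low,
        dma_constraint.ucr_high, 0, 0, &mlist, 1,
        UVM_PLA_NOWAIT | UVM_PLA_NOWAKE) != 0)
            return (ENOMEM);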
2018-01-18  While booting it does not make sense to wait for memory, there is no other process which could free it. (Alexander Bluhm)
Better to panic in malloc(9) or pool_get(9) than to sleep forever.

tested by visa@ patrick@ Jan Klemkow
suggested by kettenis@; OK deraadt@
2017-02-14  Convert most of the manual checks for CPU hogging to sched_pause(). (Martin Pieuchot)
The distinction between preempt() and yield() stays, as it is useful to know if a thread decided to yield by itself or if the kernel told it to go away.

ok tedu@, guenther@
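Illustratively, the manual pattern and its replacement (SPCF_SHOULDYIELD and curcpu() are real; whether sched_pause() takes the yielding function as an argument, as in today's tree, is an assumption):

    /* before: open-coded CPU-hogging check */
    if (curcpu()->ci_schedstate.spc_schedflags & SPCF_SHOULDYIELD)
            yield();

    /* after */
    sched_pause(yield);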
2015-10-08  Lock the page queues by turning uvm_lock_pageq() and uvm_unlock_pageq() into mtx_enter() and mtx_leave() operations. (Mark Kettenis)
Not 100% sure this won't blow up, but there is only one way to find out, and we need this to make progress on further unlocking uvm.

prodded by deraadt@
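That is, the wrappers presumably become thin macros over mutex(9) (a sketch; the mutex name follows today's uvm.pageqlock but is an assumption here):

    #include <sys/mutex.h>

    /* sketch: page-queue locking as a real mutex */
    #define uvm_lock_pageq()    mtx_enter(&uvm.pageqlock)
    #define uvm_unlock_pageq()  mtx_leave(&uvm.pageqlock)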
2015-08-21  Remove the unused loan_count field and the related uvm logic. (Visa Hankala)
Most of the page loaning code is already in the Attic.

ok kettenis@, beck@
2014-12-17  remove lock.h from uvm_extern.h. another holdover from the simpletonlock era. (Ted Unangst)
fix uvm including c files to include lock.h or atomic.h as necessary.

ok deraadt
2014-11-16  Replace a plethora of historical protection options with just PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h. (Theo de Raadt)
PROT_MASK is introduced as the one true way of extracting those bits. Remove UVM_ADV_* wrapper, using the standard names.

ok doug guenther kettenis
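For instance (illustrative function; only the PROT_* names and PROT_MASK come from the commit):

    #include <sys/mman.h>

    /* Sketch: PROT_MASK is the one true way of extracting the bits. */
    int
    mapping_is_writable(int flags)
    {
            return ((flags & PROT_MASK) & PROT_WRITE) != 0;
    }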
2014-09-14  remove unneeded proc.h includes (Jonathan Gray)
ok mpi@ kspillner@
2014-09-09  Make the cleaner, syncer, pagedaemon, aiodone daemons all yield() if the cpu is marked SHOULDYIELD. (Bret Lambert)
ok miod@ tedu@ phessler@
2014-07-12  Add a function to drop all clean pages on the page daemon queues and call it when we hibernate. (Mark Kettenis)
ok mlarkin@, miod@, deraadt@
2014-07-11  Chuck Cranor rescinded clauses in his license on the 2nd of February 2011 in NetBSD. (Jonathan Gray)
http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
2014-07-08  subtle rearrangement of includes (Theo de Raadt)
2014-07-08  bye bye UBC; ok beck dlg (Theo de Raadt)
2014-04-13  compress code by turning four line comments into one line comments. (Ted Unangst)
emphatic ok usual suspects, grudging ok miod
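In the spirit of the change, a representative before/after (invented comment text):

    /*
     * before: four lines to say one thing
     */

    /* after: one line says it */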
2014-02-06  add some more bufbackoff calls. uvm_wait optimistically (?), uvm_wait_pla after analysis and testing. (Ted Unangst)
when flushing a large mmapped file, we can eat up all the reserve bufs, but there's a good chance there will be more clean ones available.

ok beck kettenis
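A hedged sketch of such a call site (bufbackoff() and dma_constraint are real, per the 2011-07-06 entry below; the context and the size argument are invented):

    /*
     * Sketch: before sleeping for free pages, nudge the buffer cache
     * to release some from the DMA-able range.
     */
    (void)bufbackoff(&dma_constraint, npages);  /* npages is illustrative */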
2014-02-06  parenthesis to make the math right. ok beck kettenis (Ted Unangst)
2013-05-30  remove lots of comments about locking per beck's request (Ted Unangst)
2013-05-30  remove simple_locks from uvm code. ok beck deraadt (Ted Unangst)
2013-02-07  make sure the page daemon considers BUFPAGES_INACT when deciding to do work, just as is done when waking it up. (Bob Beck)
tested by me, phessler@, espie@, landry@
ok kettenis@
2012-12-10  Always back the buffer cache off on any page daemon wakeup. (Bob Beck)
This avoids a few problems noticed by phessler@ and beck@ where certain allocations would repeatedly wake the page daemon even though the page daemon's targets were already met, so it didn't do any work. We can avoid this problem when the buffer cache has pages to throw away by always doing so any time the page daemon is woken, rather than only when we are under the free page target.

ok phessler@ deraadt@
2012-11-07  Fix the buffer cache. (Bob Beck)
A long time ago (in Vienna) the reserves for the cleaner and syncer were removed. softdep and many things have not performed the same ever since. Follow-on generations of buffer cache hackers assumed the existing code was the reference and have been in a frustrating state of coprophagia ever since.

This commit:
0) Brings back a (small) reserve allotment of buffer pages, and the kva to map them, to allow the cleaner and syncer to run even when under intense memory or kva pressure.
1) Fixes a lot of comments and variables to represent reality.
2) Simplifies and corrects how the buffer cache backs off down to the lowest level.
3) Corrects how the page daemon asks the buffer cache to back off, ensuring that uvmpd_scan is done to recover inactive pages in low memory situations.
4) Adds a high water mark to the pool used to allocate struct buf's.
5) Corrects the cleaner and the sleep/wakeup cases in both low memory and low kva situations (including accounting for the cleaner/syncer reserve).

Tested by many, with very much helpful input from deraadt, miod, tobiasu, kettenis and others.

ok kettenis@ deraadt@ jj@
2011-07-06  uvm changes for buffer cache improvements. (Bob Beck)
1) Make the pagedaemon aware of the memory ranges and size of allocations where memory is being requested, and pass this information on to bufbackoff(), which will later (not yet) be used to ensure that the buffer cache gets out of the way in the right area of memory. Note that this commit does not yet make it *do* that - as currently the buffer cache is all in dma-able memory and it will simply back off.
2) Add uvm_pagerealloc_multi - to be used by the buffer cache code for reallocating pages to particular regions.

much of this work by ariane, with smatterings of me, art, and oga

ok oga@, thib@, ariane@, deraadt@
2011-07-03  Rip out and burn support for UVM_HIST. (Owain Ainsworth)
The vm hackers don't use it, don't maintain it and have to look at it all the time. About time this 800 lines of code hit /dev/null.

``never liked it'' tedu@. ariane@ was very happy when i told her i wrote this diff.
2011-04-01  Typo in comment. (Kenneth R Westerback)
2010-09-26  remove static so things show up in ddb. (Thordur I. Bjornsson)
ok miod@, oga@, tedu@
2009-10-14  Fix buffer cache backoff in the page daemon - deal with inactive pages to more correctly reflect the new state of the world. (Bob Beck)
That is, how many pages can be cheaply reclaimed, which now includes clean buffer cache pages. This change fixes situations where people would be running with a large bufcachepercent and still notice swapping without the buffer cache backing off.

ok oga@, testing by many on tech@ and others. Thanks.
2009-08-08  fix the page daemon to back off the buffer cache correctly even in the case where we are below the inactive page target. (Bob Beck)
This fixes a problem with a large buffer cache on low memory machines, where the page daemon would be woken up but the buffer cache would never be backed off because we were below the inactive page target, which could result in constant paging and basically a livelock condition.

ok oga@ art@
2009-08-02  Dynamic buffer cache support - a re-commit of what was backed out after c2k9. (Bob Beck)
Allows the buffer cache to be extended and to grow/shrink dynamically.

tested by many, ok oga@, "why not just commit it" deraadt@
2009-07-22  Put the PG_RELEASED changes diff back in. (Owain Ainsworth)
This has been tested very very thoroughly on all archs we have, excepting 88k and 68k. Please see cvs log for the individual commit messages.

ok beck@, thib@
2009-06-26  Fix a use after free in the pagedaemon. (Owain Ainsworth)
Specifically, if we free a RELEASED anon, we will first remove the page from the anon, free the anon, then get the next page relative to the anon page, then call uvm_pagefree(). The problem is that while we zero out anon->an_page, we do not zero out pg->uanon. Now, if pg->uanon is not NULL, uvm_pagefree() zeroes out some variables in the struct for us. One of the backed-out commits added more zeroing there, which would have exacerbated this use after free under heavy paging (which was where we saw bugs).

Fix this by zeroing out pg->uanon. I have looked for other similar cases, but have not found any as of yet.

been in snaps a while, "please do commit that" deraadt@
2009-06-17  date based reversion of uvm to the 4th May. (Owain Ainsworth)
More backouts in line with previous ones; this appears to bring us back to a stable condition.

A machine forced to 64mb of ram cycled 10GB through swap with this diff and is still running as I type this. Other tests by ariane@ and thib@ also seem to show that it's alright.

ok deraadt@, thib@, ariane@
2009-06-16  Backout all changes to uvm after pmemrange (which will be backed out separately). (Owain Ainsworth)
A change at or just before the hackathon has either exposed or added a very very nasty memory corruption bug that is giving us hell right now. So in the interest of kernel stability these diffs are being backed out until such a time as that corruption bug has been found and squashed, then the ones that are proven good may slowly return.

a quick hitlist of the main commits this backs out:

mine:
- uvm_objwire
- the lock change in uvm_swap.c
- using trees for uvm objects instead of the hash
- removing the pgo_releasepg callback

art@'s:
- putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate() since all callers called that just prior anyway.

ok beck@, ariane@. prompted by deraadt@.
2009-06-15  Back out all the buffer cache changes I committed during c2k9. (Bob Beck)
This reverts three commits:
1) The sysctl allowing bufcachepercent to be changed at boot time.
2) The change moving the buffer cache hash chains to a red-black tree.
3) The dynamic buffer cache (which depended on the earlier two).

ok on the backout from marco and todd
2009-06-06  Somehow I missed committing this. (Artur Grabowski)
2009-06-05  Dynamic buffer cache sizing. (Bob Beck)
This commit won't change the default behaviour of the system unless the buffer cache size is increased with sysctl kern.bufcachepercent. By default our buffer cache is 10% of memory, which with this commit is now treated as a low water mark. If the buffer cache size is increased, the new size is treated as a high water mark and the buffer cache is permitted to grow to that percentage of memory.

If the page daemon is invoked, the page daemon will ask the buffer cache to relinquish pages. If the buffer cache has more than the low water mark it will relinquish pages, allowing them to be consumed by uvm. After a short period the buffer cache will attempt to re-grow back to the high water mark. This permits the use of a large buffer cache without penalizing the available memory for other purposes.

Above the low water mark the buffer cache remains entirely subservient to the page daemon, so if uvm requires pages, the buffer cache will abandon them.

ok art@ thib@ oga@
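A schematic of the watermark policy described, in plain C (entirely illustrative; all names are invented):

    /*
     * Illustrative watermark policy, not the committed code: above
     * the low water mark the cache is subservient to the page daemon
     * and gives pages back on request; at or below it, the reserve
     * is kept.
     */
    long
    cache_pages_to_release(long cachepages, long lowpages)
    {
            if (cachepages > lowpages)
                    return (cachepages - lowpages); /* relinquish */
            return (0);                             /* keep the reserve */
    }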
2009-06-01  Since we've now cleared up a lot of the PG_RELEASED setting, remove the pgo_releasepg() hook. (Owain Ainsworth)
Just free the page the "normal" way in the one place we'll ever see PG_RELEASED and should care (uvm_page_unbusy, called in aiodoned).

ok art@, beck@, thib@