author     Owain Ainsworth <oga@cvs.openbsd.org>	2009-06-16 00:11:30 +0000
committer  Owain Ainsworth <oga@cvs.openbsd.org>	2009-06-16 00:11:30 +0000
commit     b20700966027364e7e2e3cf5ca4613cbb4e2a25b
tree       dac29c9a1582e023159a8aabe2282775b21cbdc2	/sys/uvm/uvm_pdaemon.c
parent     ab37797a62467132f94babf9bc9d57cef8402599
Back out all changes to uvm after pmemrange (which will be backed out
separately).

A change at or just before the hackathon has either exposed or added a
very nasty memory corruption bug that is giving us hell right now.
So in the interest of kernel stability these diffs are being backed out
until that corruption bug has been found and squashed; the ones that
are proven good may then slowly return.
A quick hit list of the main commits this backs out:

mine:
  - uvm_objwire
  - the lock change in uvm_swap.c
  - using trees for uvm objects instead of the hash
  - removing the pgo_releasepg callback
art@'s:
  - putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate(),
    since all callers called that just prior anyway
ok beck@, ariane@.
prompted by deraadt@.
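
For context on art@'s change: before this backout, uvm_pagedeactivate()
removed all of the page's mappings itself, so callers no longer had to.
A minimal sketch of the two calling conventions, with the queue
manipulation elided (a simplified illustration, not the actual kernel
source):

/* art@'s version, now being backed out: uvm_pagedeactivate() itself
 * removes all mappings on behalf of every caller. */
void
uvm_pagedeactivate(struct vm_page *pg)
{
	pmap_page_protect(pg, VM_PROT_NONE);	/* remove all mappings */
	/* ... move pg to the inactive queue ... */
}

/* After the backout, a caller that wants the page unmapped does it
 * explicitly just before deactivating, which is what the second hunk
 * of the diff below restores: */
	pmap_page_protect(p, VM_PROT_NONE);
	uvm_pagedeactivate(p);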
Diffstat (limited to 'sys/uvm/uvm_pdaemon.c')
-rw-r--r--  sys/uvm/uvm_pdaemon.c  42
1 file changed, 29 insertions, 13 deletions
diff --git a/sys/uvm/uvm_pdaemon.c b/sys/uvm/uvm_pdaemon.c
index b30cf1e0a98..27cdc07ae73 100644
--- a/sys/uvm/uvm_pdaemon.c
+++ b/sys/uvm/uvm_pdaemon.c
@@ -1,4 +1,4 @@
-/*	$OpenBSD: uvm_pdaemon.c,v 1.48 2009/06/15 17:01:26 beck Exp $	*/
+/*	$OpenBSD: uvm_pdaemon.c,v 1.49 2009/06/16 00:11:29 oga Exp $	*/
 /*	$NetBSD: uvm_pdaemon.c,v 1.23 2000/08/20 10:24:14 bjh21 Exp $	*/
 
 /*
@@ -820,20 +820,35 @@ uvmpd_scan_inactive(struct pglist *pglst)
 			atomic_clearbits_int(&p->pg_flags, PG_BUSY|PG_WANTED);
 			UVM_PAGE_OWN(p, NULL);
 
-			/* released during I/O? Can only happen for anons */
+			/* released during I/O? */
 			if (p->pg_flags & PG_RELEASED) {
-				KASSERT(anon != NULL);
-				/* remove page so we can get nextpg */
-				anon->an_page = NULL;
+				if (anon) {
+					/* remove page so we can get nextpg */
+					anon->an_page = NULL;
 
-				simple_unlock(&anon->an_lock);
-				uvm_anfree(anon);	/* kills anon */
-				pmap_page_protect(p, VM_PROT_NONE);
-				anon = NULL;
-				uvm_lock_pageq();
-				nextpg = TAILQ_NEXT(p, pageq);
-				/* free released page */
-				uvm_pagefree(p);
+					simple_unlock(&anon->an_lock);
+					uvm_anfree(anon);	/* kills anon */
+					pmap_page_protect(p, VM_PROT_NONE);
+					anon = NULL;
+					uvm_lock_pageq();
+					nextpg = TAILQ_NEXT(p, pageq);
+					/* free released page */
+					uvm_pagefree(p);
+
+				} else {
+
+					/*
+					 * pgo_releasepg nukes the page and
+					 * gets "nextpg" for us. it returns
+					 * with the page queues locked (when
+					 * given nextpg ptr).
+					 */
+
+					if (!uobj->pgops->pgo_releasepg(p,
+					    &nextpg))
+						/* uobj died after release */
+						uobj = NULL;
+				}
 			} else {	/* page was not released during I/O */
 				uvm_lock_pageq();
 				nextpg = TAILQ_NEXT(p, pageq);
@@ -1042,6 +1057,7 @@ uvmpd_scan(void)
 		 */
 
 		if (inactive_shortage > 0) {
+			pmap_page_protect(p, VM_PROT_NONE);
 			/* no need to check wire_count as pg is "active" */
 			uvm_pagedeactivate(p);
 			uvmexp.pddeact++;
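
The first hunk restores the pgo_releasepg pager hook for object-backed
pages. Going by the comment in the hunk, the contract a pager must
satisfy looks roughly like the sketch below; example_releasepg is a
hypothetical pager, not code from the tree (the real implementations
lived in the individual pagers, e.g. uvm_vnode.c and uvm_aobj.c):

/*
 * Sketch of the restored pgo_releasepg contract: free the released
 * page, hand the next queue entry back to the pagedaemon's scan loop,
 * and report whether the owning object survived.
 */
boolean_t
example_releasepg(struct vm_page *pg, struct vm_page **nextpgp)
{
	uvm_lock_pageq();
	if (nextpgp)
		*nextpgp = TAILQ_NEXT(pg, pageq);	/* next page for the scan */
	uvm_pagefree(pg);
	if (nextpgp == NULL)
		uvm_unlock_pageq();	/* queues stay locked when nextpgp is given */

	/*
	 * If that was the object's last page and the object is being
	 * torn down, destroy it here and return FALSE so the caller
	 * drops its pointer ("uobj died after release").
	 */
	return (TRUE);
}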