|
then move it up. previous revision would leave addr uninitialized.
pointed out by oga at nicotinebsd.org
|
|
allocations have been made, open it up. this is a variation on a previous
change that was lost in the great uvm map rewrite. allows some platforms,
notably i386, to fully utilize their address space.
|
|
Initial diff to replace the unclear short variable name "nd" with
"nfs_diskless" and to display the real NFS path to swap in pstat -s by
deraadt@
Testing by me revealed that diskless swap automount had been broken for
some time. Fix this by passing and using the correct vnode in
nfs_diskless to swapmount().
Lots of input / help deraadt@, tweaks by deraadt@
OK deraadt@
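A hedged sketch of the swapmount() hand-off; the struct field and the
swapmount() signature shown here are assumptions for illustration, not
the real interface:

	/*
	 * Use the swap vnode that diskless setup already resolved in
	 * nfs_diskless, rather than letting swapmount() guess at it.
	 */
	struct nfs_diskless nfs_diskless;	/* filled in by NFS diskless boot */
	swapmount(nfs_diskless.swap_vp);	/* swap_vp is hypothetical */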
|
|
|
|
ok tedu
|
|
machines where atomic ops aren't so simple.
ok beck deraadt miod
|
|
to keep definitions out of user space. The MD files now follow a consistent
order -- all namespace intrusion is at the tail and can be cleaned up
independently. locore, bootblocks, and libkvm still see enough visibility to
build. Checked on 90% of platforms...
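The usual shape of such a cleanup is an #ifdef _KERNEL block at the
tail of each MD header; a minimal sketch with hypothetical contents:

	/* sys/arch/example/include/param.h (hypothetical) */
	#define	PAGE_SHIFT	12
	#define	PAGE_SIZE	(1 << PAGE_SHIFT)	/* visible everywhere */

	#ifdef _KERNEL
	/* namespace intrusion kept at the tail, kernel-only */
	#define	DELAY(us)	delay(us)
	#endif /* _KERNEL */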
|
|
step 3 - re-merge 1.116 to 1.118
|
|
step 2 - re-merge 1.119 (the WAITOK diff)
|
|
step 1 - backout 1.116 to 1.119
|
|
This fix is actually by mikeb@; it needs thorough testing to verify
it doesn't bring up other issues that it had been hiding.
ok deraadt@
|
|
Spotted by oga@nicotinebsd.org, with help from dhill@. Fix by me.
ok miod@
|
|
of times. No functional change, but makes the code a bit smaller.
ok mpi@
|
|
instead of two, building upon the knowledge of the state uvm_pagealloc_pg()
leaves the uvm_page in.
ok mpi@
|
|
deadlock ourselves - based on an infrequent hang caught by sthen, and
diagnosed by kettenis and me. Fix after some iterations is to simply
call uvm_map_allocate and allocate the map entry before grabbing the
lock so we don't wait while holding the lock.
ok miod@ kettenis@
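The shape of the fix, as a sketch; the uvm_map_allocate() arguments are
elided here and the locking calls are the usual uvm macros:

	/*
	 * Allocate the map entry up front, while sleeping is still
	 * safe.  Allocating after vm_map_lock() could mean sleeping
	 * for memory with the map locked, deadlocking against anyone
	 * (including the pagedaemon) who needs that same lock.
	 */
	entry = uvm_map_allocate(/* ... */);	/* may sleep; lock not held */
	vm_map_lock(map);
	/* insert the pre-allocated entry; no allocation in here */
	vm_map_unlock(map);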
|
|
It was removed as this function was redone to use pmemrange in mid 2010
with the result that kernel malloc and other users of this function can
consume the page daemon reserve and run us out of memory.
ok kettenis@
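The restored guard looks roughly like this sketch (the uvmexp field
names follow uvm convention, but treat them as assumptions):

	/*
	 * Don't let kernel malloc and friends eat the pages the page
	 * daemon needs in order to make progress and free memory.
	 */
	if (uvmexp.free <= uvmexp.reserve_pagedaemon &&
	    curproc != uvm.pagedaemon_proc)
		return (NULL);		/* caller must wait or fail */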
|
|
to do work, just as is done when waking it up.
tested by me, phessler@, espie@, landry@
ok kettenis@
|
|
when you hit it. Separate out these tests.
ok millert@ kettenis@, phessler@, with miod@ bikeshedding.
|
|
when uvm_wait gets hit from the pagedaemon. - code copied from uvm_wait.
ok guenther@, kettenis@
|
|
inside the loop. Fixes a.out coredumps for miod, solution from guenther.
ok miod
|
|
ok guenther
|
|
with kettenis guenther and beck.
ok guenther
|
|
yield between operations. Re-grab the vnode every operation,
so that multiple coredumps can be saved at the same time.
ok guenther beck etc
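A sketch of the loop shape; nsegs and the vn_rdwr() arguments are
hypothetical, and the lock/unlock signatures are simplified:

	/*
	 * Write the core out one chunk at a time, dropping the vnode
	 * lock and yielding between operations so other coredumps
	 * (and everything else) can make progress concurrently.
	 */
	for (i = 0; i < nsegs; i++) {
		vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
		error = vn_rdwr(UIO_WRITE, vp, /* ... */);
		VOP_UNLOCK(vp);
		if (error)
			break;
		yield();	/* be nice between chunks */
	}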
|
|
a few problems noticed by phessler@ and beck@ where certain allocations
would repeatedly wake the page daemon even though the page daemon's
targets were already met, so it didn't do any work. We can avoid this
problem when the buffer cache has pages to throw away by always doing so
any time the page daemon is woken, rather than only when we are under
the free page target.
ok phessler@ deraadt@
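In page daemon terms the change amounts to something like this sketch;
bufbackoff() is the real OpenBSD entry point, while the surrounding
loop and the shortage count are assumed:

	for (;;) {
		/* sleep until something wakes the page daemon */
		(void)bufbackoff(NULL, shortage); /* always shed cache pages */
		if (uvmexp.free < uvmexp.freetarg)
			uvmpd_scan();	/* scan only when below target */
	}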
|
|
pages, so fix the nonsensical comparison introduced in rev 1.77.
ok miod@, krw@, beck@
|
|
A long time ago (in Vienna) the reserves for the cleaner and syncer were
removed. softdep and many things have not performed the same ever since.
Follow-on generations of buffer cache hackers assumed the existing code
was the reference and have been in a frustrating state of coprophagia
ever since.
This commit
0) Brings back a (small) reserve allotment of buffer pages, and the kva to
map them, to allow the cleaner and syncer to run even when under intense
memory or kva pressure.
1) Fixes a lot of comments and variables to represent reality.
2) Simplifies and corrects how the buffer cache backs off down to the lowest
level.
3) Corrects how the page daemon asks the buffer cache to back off, ensuring
that uvmpd_scan is done to recover inactive pages in low memory situations.
4) Adds a high water mark to the pool used to allocate struct buf's.
5) Corrects the cleaner and the sleep/wakeup cases in both low memory and low
kva situations. (including accounting for the cleaner/syncer reserve)
Tested by many, with very much helpful input from deraadt, miod, tobiasu,
kettenis and others.
ok kettenis@ deraadt@ jj@
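Item 0's reserve might look like the following sketch; the constant and
the cleaner/syncer identity test are assumptions for illustration:

	#define	RESERVE_PAGES	8	/* hypothetical reserve size */

	/*
	 * Only the cleaner and syncer may dip into the reserve, so
	 * they can always flush dirty buffers no matter how hard
	 * everyone else is squeezing the cache.
	 */
	if (numbufpages + npages > targetpages - RESERVE_PAGES &&
	    curproc != cleanerproc && curproc != syncerproc)
		return (NULL);	/* ordinary callers back off instead */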
|
|
advantages. We shouldn't do this. If the protection changes later on
(and VM_MAP_WIREFUTURE was set), uvm_map_protect() will wire them.
Found by Matthias Pitzl.
ok miod@ markus@
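In other words, the deferred wiring happens along these lines (a
sketch; the real uvm_map_protect() has many more cases to handle):

	/*
	 * Pages were not wired at mmap time.  If the map was marked
	 * wire-future (mlockall with MCL_FUTURE) and the entry now
	 * becomes accessible, wire it here instead.
	 */
	if ((map->flags & VM_MAP_WIREFUTURE) != 0 &&
	    new_prot != PROT_NONE && !VM_MAPENT_ISWIRED(entry)) {
		/* wire down the entry's pages */
	}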
|
|
accounting for a hypothetical miniroot filesystem in swap.
ok deraadt@
|
|
indicate that the kernel should fail with MAP_FAILED if the specified
address is not currently available instead of unmapping it.
Change ld.so on i386 to make use of __MAP_NOREMAP to improve
reliability.
__MAP_NOREMAP diff by guenther based on an earlier diff by Ariane;
ld.so bits by guenther and me
bulk build stress testing of earlier diffs by sthen
ok deraadt; committing now for further testing
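From userland the flag reads like this sketch (error handling elided):

	/*
	 * Ask for an exact placement, but have the kernel fail with
	 * MAP_FAILED instead of silently unmapping whatever already
	 * lives at addr.
	 */
	void *p = mmap(addr, len, PROT_READ | PROT_WRITE,
	    MAP_FIXED | __MAP_NOREMAP | MAP_ANON | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED)
		/* addr was busy; fall back to another address */;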
|
|
|
|
swap region for hibernate.
|
|
that don't use hibernate
requested by and ok deraadt@
|
|
used by the hibernate code.
ok deraadt@
|
|
will be needed shortly for hibernate.
ok deraadt@
|
|
It's no longer required; the code is stable.
ok kettenis@
|
|
ok ariane@
|
|
ok ariane@
|
|
uvm_addr.c r1.3.
ok deraadt, tedu
|
|
introduce any virtual cache aliasing problems. Fixes a regression introduced
by vmmap.
ok ariane@, jsing@
|
|
instead of crashing. Add a KASSERT() to catch other bugs that might
result in the tree iterators being reversed.
Problem observed by Tom Doherty (tomd at singlesecond.com)
ok deraadt@
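A sketch of the kind of KASSERT() meant here; the iterator names and
the comparison are hypothetical:

	/*
	 * Tree iterators must move forward; catch a reversed pair
	 * early rather than corrupting the tree much later.
	 */
	KASSERT(i_start == NULL || i_end == NULL ||
	    i_start->start <= i_end->start);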
|
|
anticipation of further changes to closef(). No binary change.
ok krw@ miod@ deraadt@
|
|
(That means the misplaced optimization is back in.) It broke mips and
possibly other architectures.
|
|
The optimization goes to great lengths to use less optimized code
paths in place of the simple path, where the latter is actually faster.
ok tedu, guenther
|
|
|
|
The fault path is used to update the maxrss of the faulting proc.
Doesn't affect anything, as it was 0 before.
Requested by espie, "just commit it" deraadt
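A sketch of the bookkeeping, with assumed field names (ru_maxrss is
accounted in kilobytes):

	/*
	 * On a successful fault, track the high-water mark of the
	 * faulting process's resident set size.
	 */
	rss = ptoa(p->p_vmspace->vm_rssize) / 1024;
	if (p->p_ru.ru_maxrss < rss)
		p->p_ru.ru_maxrss = rss;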
|
|
Reduces O(n log n) allocations to O(log n).
ok deraadt, tedu
|
|
- POSIX rules that a 0-byte mmap must return EINVAL
- our allocators are unable to distinguish between free memory and
0 bytes of allocated memory
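The corresponding check is tiny; a sketch of the early argument
validation in sys_mmap():

	/* POSIX: a zero-length mmap is invalid */
	if (size == 0)
		return (EINVAL);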
|
|
of per-rthread. Handling of per-thread tick and runtime counters
inspired by how FreeBSD does it.
ok kettenis@
|
|
The
	if (min < VMMAP_MIN_ADDR)
		min = VMMAP_MIN_ADDR;
code should have moved across when the small_kernel diff moved the
initialization from uvm_map_setup() to uvm_map_setup_md().
Prevents a nasty panic on hppa, sparc64 (and possibly other archs).
kettenis: the diff makes some sense to me
|
|
Has fewer special allocators on install media (where they aren't required
anyway).
Bonus: makes the vmmap initialization code easier to read.
|