|
a few problems noticed by phessler@ and beck@ where certain allocations
would repeatedly wake the page daemon even though the page daemon's targets
were already met, so it did no work. We can avoid this problem when
the buffer cache has pages to throw away by always doing so any time
the page daemon is woken, rather than only when we are under the free
page target.
ok phessler@ deraadt@
|
|
pages, so fix the nonsensical comparison introduced in rev 1.77.
ok miod@, krw@, beck@
|
|
A long time ago (in Vienna) the reserves for the cleaner and syncer were
removed. softdep and many other things have not performed the same ever since.
Follow-on generations of buffer cache hackers assumed the existing code
was the reference and have been in a frustrating state of coprophagia ever
since.
This commit
0) Brings back a (small) reserve allotment of buffer pages, and the kva to
map them, to allow the cleaner and syncer to run even when under intense
memory or kva pressure.
1) Fixes a lot of comments and variables to represent reality.
2) Simplifies and corrects how the buffer cache backs off down to the lowest
level.
3) Corrects how the page daemon asks the buffer cache to back off, ensuring
that uvmpd_scan is done to recover inactive pages in low memory situations.
4) Adds a high water mark to the pool used to allocate struct buf's (see the
sketch after this list).
5) Corrects the cleaner and the sleep/wakeup cases in both low memory and low
kva situations (including accounting for the cleaner/syncer reserve).
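A minimal kernel-side sketch of what the high water mark in 4) could look
like, assuming the pool in question is the one backing struct buf and that
the cap is applied with pool_sethiwat(9); the names bufpool, buf_hiwat and
buf_pool_cap() are illustrative, not necessarily what was committed.

#include <sys/param.h>
#include <sys/pool.h>

extern struct pool bufpool;	/* the pool used to allocate struct buf */
int buf_hiwat = 4096;		/* illustrative high water mark */

void
buf_pool_cap(void)
{
	/* Stop the pool from caching idle items above the limit. */
	pool_sethiwat(&bufpool, buf_hiwat);
}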
Tested by many, with much helpful input from deraadt, miod, tobiasu,
kettenis and others.
ok kettenis@ deraadt@ jj@
|
|
advantages. We shouldn't do this. If the protection changes later on
(and VM_MAP_WIREFUTURE was set), uvm_map_protect() will wire them.
Found by Matthias Pitzl.
ok miod@ markus@
|
|
accounting for a hypothetical miniroot filesystem in swap.
ok deraadt@
|
|
indicate that the kernel should fail with MAP_FAILED if the specified
address is not currently available instead of unmapping it.
Change ld.so on i386 to make use of __MAP_NOREMAP to improve
reliability.
__MAP_NOREMAP diff by guenther based on an earlier diff by Ariane;
ld.so bits by guenther and me
bulk build stress testing of earlier diffs by sthen
ok deraadt; committing now for further testing
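A hedged userland sketch of the flag described above, assuming __MAP_NOREMAP
is combined with MAP_FIXED so the kernel fails with MAP_FAILED instead of
unmapping whatever already lives at the requested address; the address and
length here are purely illustrative.

#include <sys/types.h>
#include <sys/mman.h>
#include <stdio.h>

int
main(void)
{
	void *want = (void *)0x20000000;	/* illustrative address */
	size_t len = 0x10000;			/* illustrative length */
	void *p;

	/* Ask for an exact placement, but refuse to clobber an
	 * existing mapping at that address. */
	p = mmap(want, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREMAP, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");		/* address range not available */
		return 1;
	}
	return 0;
}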
|
|
|
|
swap region for hibernate.
|
|
that don't use hibernate
requested by and ok deraadt@
|
|
used by the hibernate code.
ok deraadt@
|
|
will be needed shortly for hibernate.
ok deraadt@
|
|
It's no longer required; the code is stable.
ok kettenis@
|
|
ok ariane@
|
|
ok ariane@
|
|
uvm_addr.c r1.3.
ok deraadt, tedu
|
|
introduce any virtual cache aliasing problems. Fixes a regression introduced
by vmmap.
ok ariane@, jsing@
|
|
instead of crashing. Add a KASSERT() to catch other bugs that might
result in the tree iterators being reversed.
Problem observed by Tom Doherty (tomd at singlesecond.com)
ok deraadt@
|
|
anticipation of further changes to closef(). No binary change.
ok krw@ miod@ deraadt@
|
|
(That means the misplaced optimization is back in.) It broke mips and
possibly other architectures.
|
|
The optimization goes to great lengths to use less optimized code
paths in place of the simple path, where the latter is actually faster.
ok tedu, guenther
|
|
|
|
The fault path is used to update the maxrss of the faulting proc.
Doesn't affect anything, as it was 0 before.
Requested by espie, "just commit it" deraadt
|
|
Reduces O(n log n) allocations to O(log n).
ok deraadt, tedu
|
|
- POSIX rules that a 0-byte mmap must return EINVAL (see the sketch below)
- our allocators are unable to distinguish between free memory and
0 bytes of allocated memory
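A minimal userland illustration of the first point, using only the standard
mmap(2) interface: a zero-length request is rejected with EINVAL.

#include <sys/mman.h>
#include <errno.h>
#include <stdio.h>

int
main(void)
{
	/* A zero-length anonymous mapping must not succeed. */
	void *p = mmap(NULL, 0, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);

	if (p == MAP_FAILED && errno == EINVAL)
		printf("0-byte mmap rejected with EINVAL, as POSIX requires\n");
	return 0;
}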
|
|
of per-rthread. Handling of per-thread tick and runtime counters
inspired by how FreeBSD does it.
ok kettenis@
|
|
The
if (min < VMMAP_MIN_ADDR)
min = VMMAP_MIN_ADDR;
code should have moved across when the small_kernel diff moved the
initialization from uvm_map_setup() to uvm_map_setup_md().
Prevents a nasty panic on hppa, sparc64 (and possibly other archs).
kettenis: the diff makes some sense to me
|
|
Has fewer special allocators on install media (where they aren't required
anyway).
Bonus: makes the vmmap initialization code easier to read.
|
|
no oks (it is really a pain to review properly)
extensively tested, I'm confident it'll be stable
'now is the time' from several icb inhabitants
Diff provides:
- ability to specify different allocators for different regions/maps
- a simpler implementation of the current allocator
- currently in compatibility mode: it will generate similar addresses
as the old allocator
|
|
Found by and original fix from Geoff Steckel.
While here, switch the assert that prevents this from happening from DEBUG to DIAGNOSTIC.
ok thib@, miod@
|
|
Among other things, this fixes early panics on hppa systems whose memory size
is exactly 128MB.
Found the hard way and reported by fries@, not reported by beck@
|
|
|
|
defined(__HAVE_VM_PAGE_MD), for convenience.
|
|
sys_osigaltstack() is 7 years old and no longer needed; all glory to
the sys_sigaltstack()!
sys_ogetdirentries() is about 9 months old, but still acceptable
within our release cycle; move from STD to COMPAT_48 to make this
clearer for tedu@ next year.
sys_sbrk() and sys_sstk() are completely obsolete: all they do is
return ENOSYS.
ok guenther@
|
|
No callers, no functional change.
|
|
This function will probably die before ever being called
from the in-tree code, since hibernate will move to RLE encoding.
No functional change, function had no callers.
|
|
This is so I can move the pig allocator to subr_hibernate.
No functional change.
|
|
back it out.
|
|
More and more things are allocating outside of uvm_pagealloc these days, making
it easy for something like the buffer cache to eat your last page with no
repercussions (other than a hung machine, of course).
ok ariane@ also ok ariane@ again after I spotted and fixed a possible underflow
problem in the calculation.
|
|
changes to libevent and zlib headers sent to the upstream maintainers.
ok jmc@ (for typos), millert@
|
|
1) Make the pagedaemon aware of the memory ranges and size of allocations
where memory is being requested, and pass this information on to
bufbackoff(), which will later (not yet) be used to ensure that the
buffer cache gets out of the way in the right area of memory.
Note that this commit does not yet make it *do* that, as currently
the buffer cache is all in dma-able memory and it will simply back
off (a sketch of the call follows this list).
2) Add uvm_pagerealloc_multi - to be used by the buffer cache code
for reallocating pages to particular regions.
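A hedged kernel-side sketch of the idea in 1), assuming bufbackoff() takes a
physical-memory constraint plus a page count as described above; the function
name and locals are made up for illustration and this is not the committed
code.

#include <sys/param.h>
#include <sys/buf.h>
#include <uvm/uvm_extern.h>

void
pagedaemon_backoff_sketch(paddr_t starved_low, paddr_t starved_high)
{
	struct uvm_constraint_range range;
	long shortage;

	range.ucr_low = starved_low;	/* start of the starved range */
	range.ucr_high = starved_high;	/* end of the starved range */
	shortage = uvmexp.freetarg - uvmexp.free;

	if (shortage > 0)
		(void)bufbackoff(&range, shortage);	/* may free nothing yet */
}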
much of this work by ariane, with smatterings of me, art, and oga
ok oga@, thib@, ariane@, deraadt@
|
|
the extraction loop should stop.
No more 298 pages in 42 segments when asking for only 32 pages in 1 segment.
ok oga@
|
|
msync(size == 0) did strange things based on the original mapping
segments and tried to manipulate them. This code was copied from the
original vm when we moved to uvm.
POSIX says nothing about this behaviour, and anything that depends on it is
systemically broken, so rip it out and make sys_msync about 30% smaller.
ok deraadt@, tedu@, guenther@.
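A small userland illustration of the portable way to call msync(2) after this
change: pass the real length of the region instead of leaning on the removed
size == 0 special case. The helper name and variables are illustrative.

#include <sys/mman.h>
#include <err.h>
#include <stddef.h>

/* Flush a shared, file-backed region explicitly. */
void
flush_region(void *addr, size_t len)
{
	if (msync(addr, len, MS_SYNC) == -1)	/* explicit length, not 0 */
		err(1, "msync");
}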
|
|
|
|
|
|
now we can free vnodes again.
ok gcc@, jetpack@, beck@, art@.
(the results of this were hilarious)
|
|
``beat it'' tedu@ the deleteotron
|
|
The vm hackers don't use it, don't maintain it and have to look at it all the
time. About time these 800 lines of code hit /dev/null.
``never liked it'' tedu@. ariane@ was very happy when I told her I wrote
this diff.
|
|
They may die now.
``kill it'' thib@
|
|
prompted by tedu@
|
|
|