anticipation of further changes to closef(). No binary change.
ok krw@ miod@ deraadt@
|
|
(That means the misplaced optimization is back in.) It broke mips and
possibly other architectures.
|
|
The optimization goes to great lengths to use less optimized code
paths in place of the simple path, where the latter is actually faster.
ok tedu, guenther
|
|
|
|
The fault path is used to update the maxrss of the faulting proc.
Doesn't affect anything, as it was 0 before.
Requested by espie, "just commit it" deraadt
|
|
Reduces O(n log n) allocations to O(log n).
ok deraadt, tedu
|
|
- POSIX rules that a 0-byte mmap must return EINVAL
- our allocators are unable to distinguish between free memory and
0 bytes of allocated memory
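A hedged sketch of the check this implies (the actual sys_mmap() code
may differ):

	/* POSIX: a zero-length mapping request is invalid */
	if (size == 0)
		return (EINVAL);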
|
|
of per-rthread. Handling of per-thread tick and runtime counters
inspired by how FreeBSD does it.
ok kettenis@
|
|
The
	if (min < VMMAP_MIN_ADDR)
		min = VMMAP_MIN_ADDR;
code should have moved across when the small_kernel diff moved the
initialization from uvm_map_setup() to uvm_map_setup_md().
Prevents a nasty panic on hppa, sparc64 (and possibly other archs).
kettenis: the diff makes some sense to me
|
|
Has fewer special allocators on install media (where they aren't required
anyway).
Bonus: makes the vmmap initialization code easier to read.
|
|
no oks (it is really a pain to review properly)
extensively tested, I'm confident it'll be stable
'now is the time' from several icb inhabitants
Diff provides:
- ability to specify different allocators for different regions/maps
- a simpler implementation of the current allocator
- currently in compatibility mode: it will generate addresses similar to
  those produced by the old allocator
|
|
Found by and original fix from Geoff Steckel.
While here, switch the assert that prevents this from happening from DEBUG to DIAGNOSTIC.
ok thib@, miod@
|
|
Among other things, this fixes early panics on hppa systems whose memory
size is exactly 128MB.
Found the hard way and reported by fries@, not reported by beck@
|
|
|
|
defined(__HAVE_VM_PAGE_MD), for convenience.
|
|
sys_osigaltstack() is 7 years old and no longer needed; all glory to
the sys_sigaltstack()!
sys_ogetdirentries() is about 9 months old, but still acceptable
within our release cycle; move from STD to COMPAT_48 to make this
clearer for tedu@ next year.
sys_sbrk() and sys_sstk() are completely obsolete: all they do is
return ENOSYS.
ok guenther@
|
|
No callers, no functional change.
|
|
This function will probably die before ever being called
from the in-tree code, since hibernate will move to RLE encoding.
No functional change, function had no callers.
|
|
This is so I can move the pig allocator to subr_hibernate.
No functional change.
|
|
back it out.
|
|
More and more things are allocating outside of uvm_pagealloc these days, making
it easy for something like the buffer cache to eat your last page with no
repercussions (other than a hung machine, of course).
ok ariane@; ok ariane@ again after I spotted and fixed a possible
underflow problem in the calculation.
|
|
changes to libevent and zlib headers sent to the upstream maintainers.
ok jmc@ (for typos), millert@
|
|
1) Make the pagedaemon aware of the memory ranges and size of allocations
where memory is being requested, and pass this information on to
bufbackoff(), which will later (not yet) be used to ensure that the
buffer cache gets out of the way in the right area of memory.
Note that this commit does not yet make it *do* that - as currently
the buffer cache is all in dma-able memory and it will simply back
off.
2) Add uvm_pagerealloc_multi - to be used by the buffer cache code
for reallocating pages to particular regions.
much of this work by ariane, with smatterings of me, art, and oga
ok oga@, thib@, ariane@, deraadt@
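A hedged sketch of point 1, assuming bufbackoff() takes a
uvm_constraint_range and a page count (exact types and names may differ):

	struct uvm_constraint_range range;

	range.ucr_low = low;	/* bounds of the failing allocation */
	range.ucr_high = high;
	(void)bufbackoff(&range, npages);	/* ask the bufcache to back off here */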
|
|
the extraction loop should stop.
No more 298 pages in 42 segments when asking for only 32 pages in 1 segment.
ok oga@
|
|
msync(size == 0) did strange things: it looked at the original mapping's
segments and tried to manipulate them. This code was copied from the
original vm when we moved to uvm.
POSIX says nothing about this behaviour, and anything that depends on it
is systemically broken, so rip it out and make sys_msync about 30% smaller.
ok deraadt@, tedu@, guenther@.
|
|
|
|
|
|
now we can free vnodes again.
ok gcc@, jetpack@, beck@, art@.
(the results of this were hilarious)
|
|
``beat it'' tedu@ the deleteotron
|
|
The vm hackers don't use it, don't maintain it and have to look at it all the
time. About time these 800 lines of code hit /dev/null.
``never liked it'' tedu@. ariane@ was very happy when i told her i wrote
this diff.
|
|
They may die now.
``kill it'' thib@
|
|
prompted by tedu@
|
|
|
|
This is no functional change since aobjs never actually hit this path. (also it is
my bug from a while ago).
ok ariane@
|
|
It will handle an empty list just fine (there's a small optimisation
possible here to avoid grabbing the fpageqlock if no pages need freeing,
but that is definitely another diff)
ok ariane@
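A hedged sketch of that possible optimisation (explicitly not part of
this commit):

	/* skip the free-page-queue lock when there is nothing to free */
	if (!TAILQ_EMPTY(&pgl)) {
		uvm_lock_fpageq();
		/* ... return the pages to the free queues ... */
		uvm_unlock_fpageq();
	}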
|
|
ok ariane@
|
|
collect uvm_pglist.c
uvm_pglistalloc and free are just thin wrappers around pmemrange these
days and don't really need their own file.
ok ariane@
|
|
Requested by dlg@
ok oga@
|
|
ok beck@
|
|
deactivate pages after syncing.
While here, don't check flags for PQ_INACTIVE (this is the only place
outside uvm_page.c where this is done) because pagedeactivate does this
already.
First part from Christian Ehrhart, second from me.
Both ok ariane@.
I meant to commit this about a week ago, but accidentally committed to my
local cvs mirror then forgot about it.
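A hedged before/after sketch of the second part:

	/* before: callers tested the queue flag themselves */
	if ((pg->pg_flags & PQ_INACTIVE) == 0)
		uvm_pagedeactivate(pg);

	/* after: uvm_pagedeactivate() already performs that check */
	uvm_pagedeactivate(pg);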
|
|
outside the tree.
|
|
a) chooses incorrect kernel memory on the macppc
b) perhaps on zaurus too, which does not make it to copyright
c) was not tested on those platforms before commit
|
|
sel_addr &= ~(pmap_align - 1);
with pmap_align allowed to be 0 (no PMAP_PREFER) is a bad idea.
Fix this by a conditional.
ok oga@
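Why the unconditional mask is wrong: when pmap_align == 0, pmap_align - 1
wraps to all ones, so ~(pmap_align - 1) == 0 and the mask zeroes sel_addr.
A hedged sketch of the conditional fix:

	if (pmap_align != 0)
		sel_addr &= ~(pmap_align - 1);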
|
|
The new world order of pmemrange makes this data completely redundant
(being dealt with by the pmemrange constraints instead). Remove all code
that messes with the freelist.
While touching every caller of uvm_page_physload() anyway, add the flags
argument to all callers (all but one is 0 and that one already used
PHYSLOAD_DEVICE) and remove the macro magic to allow callers to continue
without it.
Should shrink the code a bit, as well.
matthew@ pointed out some mistakes i'd made.
``freelist death, I like. Ok.'' ariane@
``I agree with the general direction, go ahead and i'll fix any fallout
shortly'' miod@ (could not check that 68k, 88k and vax would build)
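A hedged example of what a caller looks like after this change (segment
addresses are illustrative):

	/* ordinary RAM: no flags */
	uvm_page_physload(atop(seg_start), atop(seg_end),
	    atop(avail_start), atop(avail_end), 0);

	/* memory the page allocator must treat as device memory */
	uvm_page_physload(atop(dev_start), atop(dev_end),
	    atop(dev_start), atop(dev_end), PHYSLOAD_DEVICE);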
|
|
for (some; stuff; here)
	;
instead of
for (some; stuff; here);
reads easier.
ok ariane@
|
|
ok ariane@
|
|
With this, a diff that makes 64-bit unclean applications explode is a
one-line diff.
ok deraadt
|
|
The old VM_MAP_RANGE_CHECK macro was wrong and caused code to be unreadable
(argument altering macros are harmful).
Each function now treats the memory range outside the map as it would treat
free memory: if it would error on being given free memory, it'll error
in a similar fashion when the start,end parameters fall outside the map.
If it would accept free memory in its argument range, it'll silently accept
the outside-map memory too.
Confirmed to help ports build machines.
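A hedged illustration of the convention, for a function that errors on
free memory (not the actual uvm_map.c code):

	/* an out-of-map range gets the same answer free memory would */
	if (start < vm_map_min(map) || end > vm_map_max(map))
		return (EINVAL);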
|
|
vmmap is designed to perform address space randomized allocations,
without letting fragmentation of the address space go through the roof.
Some highlights:
- kernel address space randomization
- proper implementation of guardpages
- roughly 10% system time reduction during kernel build
Tested by a lot of people on tech@ and developers.
Theo's machines are still happy.
|
|
Before we were only calling uao_dropswap() if there was a page, meaning
that if the buffer was swapped out then we would leak the slot.
Quite rare because only pipebuffers should swap from the kernel object,
but i've seen panics that implied this had happened (alpha.p for example).
ok thib@ after a lot of discussion and checking the semantics.
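A hedged sketch of the fix:

	/* drop the swap slot unconditionally, not only when a page is
	 * resident in the (pg != NULL) branch */
	slot = uao_dropswap(uobj, pageidx);
	if (pg != NULL) {
		/* ... existing page handling ... */
	}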
|