Age | Commit message | Author |
|
doesn't have all the values and therefore can't be used everywhere.
ok deraadt@ kettenis@
|
|
eliminating the must-be-kept-in-sync UVM_INH_* macros
ok deraadt@ tedu@
|
|
PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
PROT_MASK is introduced as the one true way of extracting those bits.
Remove UVM_ADV_* wrapper, using the standard names.
ok doug guenther kettenis
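A minimal userland sketch of the extraction idiom, assuming nothing beyond the standard PROT_* names (the 0x100 bit below is a made-up non-protection flag, and the PROT_MASK fallback is only there in case the macro isn't visible outside the kernel):

#include <sys/mman.h>
#include <stdio.h>

#ifndef PROT_MASK
#define PROT_MASK	(PROT_READ | PROT_WRITE | PROT_EXEC)
#endif

int
main(void)
{
	int flags = PROT_READ | PROT_WRITE | 0x100;	/* prot bits + other bits */
	int prot = flags & PROT_MASK;			/* keep only the prot bits */

	printf("prot = %#x\n", prot);
	return 0;
}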
|
|
cause a SIGSEGV or SIGBUS when a mapped file gets truncated. Accesses to
pages of such a mapping that are not backed by the file will be satisfied by
zero-filled anonymous pages. Makes passing file descriptors of mapped files
usable without having to play tricks with signal handlers.
"steal your mmap flag" deraadt@
|
|
|
|
on the 2nd of February 2011 in NetBSD.
http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
|
|
|
|
wakeup and clearing the PG_BUSY and PG_WANTED flags, so try to keep those
bits as close together as possible and definitely avoid calling random code
in between.
ok guenther@, tedu@
|
|
an offset/size/address by shifting by PAGE_SHIFT. Make uvm_objwire/unwire
use voff_t instead of off_t. The former is the right type here even if it is
equivalent to the latter.
Inspired by somewhat similar changes in Bitrig.
ok deraadt@, guenther@
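A self-contained sketch of the page/offset conversion being described; the PAGE_SHIFT value and the voff_t typedef below are stand-ins for the kernel's own definitions:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12		/* illustrative; real value is per-arch */
#define PAGE_SIZE	(1 << PAGE_SHIFT)

typedef int64_t voff_t;			/* stand-in for the UVM object offset type */

int
main(void)
{
	voff_t off = 5 * PAGE_SIZE + 123;

	/* Byte offset -> page number and back, by shifting by PAGE_SHIFT. */
	voff_t pgno = off >> PAGE_SHIFT;
	voff_t base = pgno << PAGE_SHIFT;

	printf("off %lld -> page %lld, page base %lld\n",
	    (long long)off, (long long)pgno, (long long)base);
	return 0;
}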
|
|
emphatic ok usual suspects, grudging ok miod
|
|
which is the default, unless the fault call is explicitly used to wire a given
page.
The number of pages being faulted in was borrowed from the FreeBSD VM code
about 15 years ago, at a time when FreeBSD was only reliably running on 4KB
page size systems.
It is questionable whether faulting the same number of pages on platforms
where the page size is larger is a good idea, as it may cause too much I/O.
Add an uvmfault_init() routine, which will compute the proper number of pages
at runtime, depending upon the actual page size, and attempting to fault in
the same overall size the previous code would have done with 4KB pages.
ok tedu@
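A small model of the computation: keep the total bytes faulted constant while the page size varies. The 16-page, 4KB historical baseline here is an assumption for illustration; the real uvmfault_init() works from the kernel's actual page size:

#include <stdio.h>

#define OLD_PAGE_SIZE	4096
#define OLD_NPAGES	16		/* assumed historical fault-ahead count */

static int
fault_npages(int page_size)
{
	int n = (OLD_NPAGES * OLD_PAGE_SIZE) / page_size;

	return n > 0 ? n : 1;		/* always fault at least one page */
}

int
main(void)
{
	printf("4K pages:  %d\n", fault_npages(4096));	/* 16 */
	printf("8K pages:  %d\n", fault_npages(8192));	/* 8  */
	printf("16K pages: %d\n", fault_npages(16384));	/* 4  */
	return 0;
}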
|
|
the pmap_update() to the end of the loop, rather than after each loop
iteration - which might not even end up invoking pmap_enter()!
Quiet blessing from guenther@ deraadt@
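A compilable model of the loop restructuring, with stand-in stubs for pmap_enter()/pmap_update(); the real loop walks a uvm_object's pages and operates on the kernel pmap:

#include <stdio.h>

static int  need_enter(int i)  { return (i & 1) == 0; }	/* stand-in condition */
static void pmap_enter_stub(int i)  { printf("enter %d\n", i); }
static void pmap_update_stub(void)  { printf("update\n"); }

static void
enter_pages(int npages)
{
	int i;

	for (i = 0; i < npages; i++) {
		if (need_enter(i))
			pmap_enter_stub(i);
		/*
		 * The old code called pmap_update_stub() here, once per
		 * iteration, even when nothing was entered.
		 */
	}
	/* New code: a single batched update after the loop. */
	pmap_update_stub();
}

int
main(void)
{
	enter_pages(4);
	return 0;
}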
|
|
|
|
|
|
|
|
|
|
The fault path is used to update the maxrss of the faulting proc.
Doesn't affect anything, as it was 0 before.
Requested by espie, "just commit it" deraadt
|
|
of per-rthread. Handling of per-thread tick and runtime counters
inspired by how FreeBSD does it.
ok kettenis@
|
|
no oks (it is really a pain to review properly)
extensively tested, I'm confident it'll be stable
'now is the time' from several icb inhabitants
Diff provides:
- ability to specify different allocators for different regions/maps
- a simpler implementation of the current allocator
- currently in compatibility mode: it will generate addresses similar to
those of the old allocator
|
|
The vm hackers don't use it, don't maintain it and have to look at it all the
time. About time these 800 lines of code hit /dev/null.
``never liked it'' tedu@. ariane@ was very happy when i told her i wrote
this diff.
|
|
ok ariane@
|
|
outside the tree.
|
|
vmmap is designed to perform address space randomized allocations,
without letting fragmentation of the address space go through the roof.
Some highlights:
- kernel address space randomization
- proper implementation of guardpages
- roughly 10% system time reduction during kernel build
Tested by a lot of people on tech@ and developers.
Theo's machines are still happy.
|
|
This has been tested very thoroughly on all archs we have except
88k and 68k. Please see the cvs log for the individual commit
messages.
ok beck@, thib@
|
|
We still have no idea why this stops the crashes, but it does.
A machine forced to 64MB of RAM cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
|
|
separately).
a change at or just before the hackathon has either exposed or added a
very very nasty memory corruption bug that is giving us hell right now.
So in the interest of kernel stability these diffs are being backed out
until such a time as that corruption bug has been found and squashed,
then the ones that are proven good may slowly return.
a quick hitlist of the main commits this backs out:
mine:
uvm_objwire
the lock change in uvm_swap.c
using trees for uvm objects instead of the hash
removing the pgo_releasepg callback.
art@'s:
putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate() since
all callers called that just prior anyway.
ok beck@, ariane@.
prompted by deraadt@.
|
|
just move that into uvm_pagedeactivate.
oga@ ok
|
|
pgo_releasepg() hook and just free the page the "normal" way in the one
place we'll ever see PG_RELEASED and should care (uvm_page_unbusy,
called in aiodoned).
ok art@, beck@, thib@
|
|
Makes trace in ddb useful.
ok oga
|
|
By pseudo-inline, I mean that if a certain macro was defined, they would
be inlined. However, no architecture defines that, and none has for a
very very long time. Therefore mainly this just makes the code a damned
sight easier to read. Some k&r -> ansi declarations while I'm in there.
"just commit it" art@. ok weingart@.
|
|
K&R function declarations, so switch them all over to ansi-style, in
accordance with the prophesy.
"go for it" art@
|
|
Found by LLVM/Clang Static Analyzer.
ok miod@ art@
|
|
ckuethe@ for a while. Okay beck@, "it is good timing" deraadt@.
|
|
simple_{lock/unlock}.
ok art@
|
|
|
|
to separate locking; on most modern machines this is not enough
since operations on short types touch other short types that share the
same word in memory.
Merge pg_flags and pqflags again and now use atomic operations to change
the flags. Also bump wire_count to an int and pg_version might go
int as well, just for alignment.
tested by many, many. ok miod@
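A userland model of the resulting idiom, with C11 atomics standing in for the kernel's own atomic bit-set/clear helpers and made-up flag values:

#include <stdatomic.h>
#include <stdio.h>

#define PG_BUSY		0x0001	/* illustrative values only */
#define PQ_INACTIVE	0x0100

struct vm_page_model {
	atomic_uint pg_flags;	/* page + pageq flags merged into one word */
};

int
main(void)
{
	struct vm_page_model pg = { .pg_flags = 0 };

	atomic_fetch_or(&pg.pg_flags, PG_BUSY | PQ_INACTIVE);	/* set bits */
	atomic_fetch_and(&pg.pg_flags, ~PG_BUSY);		/* clear bits */

	printf("flags = %#x\n", atomic_load(&pg.pg_flags));	/* 0x100 */
	return 0;
}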
|
|
kmem_object) just so that we can remove them, just use pmap_extract
to get the pages to free and simplify a lot of code to not deal with
the list of intrsafe maps, intrsafe objects, etc.
miod@ ok
|
|
to "pg_flags" and "pg_version", so that they are a bit easier to work with.
Whoever uses generic names like this for a popular struct obviously doesn't
read much code.
Most architectures compile and there are no functionality changes.
deraadt@ ok ("if something fails to compile, we fix that by hand")
|
|
ok otto@
|
|
eyeballed by miod@ and pedro@
|
|
no change for normal code
|
|
miod@ ok
|
|
us did not see it or get a chance to test it before it was committed. It
broke cvs, in the ami driver, making it not succeed at seeing its devices.
|
|
this results in less kva waste due to static preallocation of those
for every phys page and also every swap page.
tested by beck krw miod
|
|
kettenis@ tedu@ ok
|
|
14060 skip MADV_SEQUENTIAL if refaulting
18037 missing pageactivate
tested for some time by jolan krw
|
|
|
|
|
|
|
|
all architectures but arm, where it is needed.
|