Age | Commit message | Author |
|
ok krw@ sthen@
|
|
way we can do some useful kernel work in parallel with other things and create
a reservoir of zeroed pages ready for use elsewhere. This should reduce
latency. The thread runs at the absolute lowest priority such that we don't
keep other kernel threads or userland from doing useful work.
Can be easily disabled by disabling the kthread_create(9) call in main(),
which perhaps we should do for non-MP kernels.
ok deraadt@, tedu@
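A hedged sketch of the mechanism described above; the thread function, the
wake channel and the two free-list helpers are illustrative names, not the
identifiers from the tree:

    void
    uvm_pagezero_thread(void *arg)
    {
        struct vm_page *pg;

        for (;;) {
            /* illustrative helper: pull an unzeroed free page */
            pg = grab_unzeroed_free_page();
            if (pg == NULL) {
                /* nothing to do until pages are freed again */
                tsleep(&uvmexp.zeropages, PZERO, "zerothread", 0);
                continue;
            }
            uvm_pagezero(pg);               /* zero the page contents */
            /* illustrative helper: return it to the zeroed free list */
            put_on_zeroed_free_list(pg);
        }
    }

    /* in main(); commenting this out disables the thread entirely */
    kthread_create(uvm_pagezero_thread, NULL, NULL, "zerothread");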
|
|
cause a SIGSEGV or SIGBUS when a mapped file gets truncated. Access to
pages of such a mapping that are not backed by the file will be satisfied
with zero-filled anonymous pages. Makes passing file descriptors of mapped
files usable without having to play tricks with signal handlers.
"steal your mmap flag" deraadt@
|
|
ok mpi@ kspillner@
|
|
yield() if the cpu is marked SHOULDYIELD.
ok miod@ tedu@ phessler@
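The check presumably looks something like this, assuming the per-CPU flag is
SPCF_SHOULDYIELD in spc_schedflags as elsewhere in the scheduler:

    /* give up the CPU if the scheduler has asked this CPU to yield */
    if (curcpu()->ci_schedstate.spc_schedflags & SPCF_SHOULDYIELD)
        yield();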
|
|
have been seeing with tmpfs. Based on a similar fix from Bitrig by
Owain Ainsworth.
ok jsg@
|
|
|
|
the hint returned is over VM_MAXUSER_ADDRESS, apparently; better be safe for
now while this is investigated further.
|
|
|
|
Please spare some change for the mips64 memory-challenged machines..
Some change, Sir?
Fixes at least the octeon platform. Found the hard way on my DSR500.
Found by Boss tedu@ and Boss deraadt@
Okay Boss miod@
|
|
|
|
after discussions with beck deraadt kettenis.
|
|
it when we hibernate.
ok mlarkin@, miod@, deraadt@
|
|
on the 2nd of February 2011 in NetBSD.
http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
|
|
don't need to be married.
ok guenther miod beck jsing kettenis
|
|
|
|
won't be pulling in the uvm side of the kitchen.
|
|
this inside #ifdef _KERNEL in any case, so nothing really changes.
|
|
|
|
|
|
wakeup and clearing the PG_BUSY and PG_WANTED flags, so try to keep those
bits as close together as possible and definitely avoid calling random code
in between.
ok guenther@, tedu@
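The idiom in question, roughly, using pg_flags and atomic_clearbits_int() as
throughout uvm:

    /* wake any waiters and drop the busy bits back to back,
     * with no unrelated code in between */
    if (pg->pg_flags & PG_WANTED)
        wakeup(pg);
    atomic_clearbits_int(&pg->pg_flags, PG_BUSY | PG_WANTED);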
|
|
ok guenther
|
|
|
|
Move all legacy MAP_FOO values behind #ifndef _KERNEL and redefine
them to either be aliases for existing flags (e.g., MAP_COPY ->
MAP_PRIVATE) or 0.
Also, add MAP_OLDFOO defines (behind #ifndef _KERNEL) so the kernel
and kdump can remain compatible with current OpenBSD binaries.
ok deraadt
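A sketch of what the <sys/mman.h> arrangement looks like; the exact set of
legacy flags listed here is illustrative:

    #ifndef _KERNEL
    /* legacy flags kept for source compatibility only */
    #define MAP_COPY         MAP_PRIVATE    /* alias for an existing flag */
    #define MAP_FILE         0              /* historic no-ops */
    #define MAP_HASSEMAPHORE 0
    #define MAP_NORESERVE    0
    #define MAP_RENAME       0
    #endif /* !_KERNEL */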
|
|
was empty then the first page allocation should sleep until it can get one.
ok tedu@
|
|
This provides a way for a process to designate pages in its address
space that should be replaced by fresh, zero-initialized anonymous
memory in forked child processes, rather than being copied or shared.
ok jmc, kettenis, tedu, deraadt; positive feedback from many more
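Userland usage sketch, assuming the new inheritance value is MAP_INHERIT_ZERO
for minherit(2):

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Pages holding secrets are replaced by zero-filled anonymous
     * memory in the child instead of being copied across fork(). */
    void
    example(void)
    {
        size_t len = getpagesize();
        char *secret = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (secret == MAP_FAILED)
            return;
        minherit(secret, len, MAP_INHERIT_ZERO);
        if (fork() == 0) {
            /* secret[0] reads as 0 here in the child */
            _exit(0);
        }
    }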
|
|
|
|
to the process's vmspace and filedescs. struct proc continues to
keep copies of the pointers, copying them on fork, clearing them
on exit, and (for vmspace) refreshing on exec.
Also, make uvm_swapout_threads() thread-aware, eliminating p_swtime
in the kernel.
particular testing by ajacoutot@ and sebastia@
|
|
|
|
an offset/size/address by shifting by PAGE_SHIFT. Make uvm_objwire/unwire
use voff_t instead of off_t. The former is the right type here even if it is
equivalent to the latter.
Inspired by a somewhat similar change in Bitrig.
ok deraadt@, guenther@
|
|
pulled by <uvm/uvm_extern.h> and turn uvm_total() into a private
function.
The preferred way to get memory stats is through the VM_UVMEXP
sysctl(3) since VM_METER is just a wrapper on top of it. In the
kernel, use `uvmexp' directly instead of uvm_total().
This change does not remove <sys/vmmeter.h> from <uvm/uvm_extern.h>,
to give port maintainers some more time to fix their ports.
ok guenther@ as part of a larger diff.
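A userland sketch of the preferred interface; struct uvmexp comes from
<uvm/uvmexp.h> in recent trees, and that header location is an assumption
here:

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <uvm/uvmexp.h>
    #include <stdio.h>

    int
    print_mem_stats(void)
    {
        int mib[2] = { CTL_VM, VM_UVMEXP };
        struct uvmexp uvmexp;
        size_t size = sizeof(uvmexp);

        if (sysctl(mib, 2, &uvmexp, &size, NULL, 0) == -1)
            return -1;
        printf("%d of %d pages free, page size %d\n",
            uvmexp.free, uvmexp.npages, uvmexp.pagesize);
        return 0;
    }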
|
|
|
|
uvm_uarea_alloc()
function name from NetBSD; arm testing by miod@
|
|
change. From Pedro Martelletto via bitrig.
ok beck@, krw@
|
|
a remove-and-insert-all-items approach for now and remove the comments that
suggest manipulating list pointers. Pointed out by Pedro Martelletto.
ok beck@, krw@, mikeb@
|
|
ok beck@, miod@
|
|
emphatic ok usual suspects, grudging ok miod
|
|
|
|
on the written buffers. Use the flag for writes from the page daemon to
ensure that we free buffers written out by the page daemon rather than
caching them.
ok kettenis@
|
|
in uvm_pmr_rootupdate(). Issue spotted and fix provided by Kieran Devlin.
|
|
reaper from hogging the CPU. It does the kernel lock twiddle trick to allow
other CPUs a chance to run, and also checks whether the reaper has been
running for an entire timeslice and should be preempted.
ok deraadt
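The "twiddle" trick amounts to something like this inside the reap loop
(sketch; the timeslice check is assumed to be the usual SPCF_SHOULDYIELD
test):

    /* briefly drop the kernel lock so other CPUs can grab it, then
     * yield if we have been running for a full timeslice */
    KERNEL_UNLOCK();
    KERNEL_LOCK();
    if (curcpu()->ci_schedstate.spc_schedflags & SPCF_SHOULDYIELD)
        yield();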
|
|
which is the default, unless the fault call is explicitly used to wire a given
page.
The number of pages being faulted in was borrowed from the FreeBSD VM code
about 15 years ago, at a time when FreeBSD was only reliably running on
systems with a 4KB page size.
It is questionable whether faulting in the same number of pages on platforms
where the page size is larger is a good idea, as it may cause too much I/O.
Add an uvmfault_init() routine, which computes the proper number of pages at
runtime, depending upon the actual page size, attempting to fault in the same
overall size the previous code would have used with 4KB pages.
ok tedu@
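A hedged sketch of the computation, assuming the per-advice table is
uvmadvice[] with nback/nforw fields as in uvm_fault.c; the 4KB-era counts
used here are illustrative:

    void
    uvmfault_init(void)
    {
        int nback, nforw;

        /* the historical tuning assumed 4KB pages; scale the counts
         * so the same number of bytes is faulted in on any page size */
        nback = (3 * 4096) / PAGE_SIZE;     /* pages behind the fault */
        nforw = (4 * 4096) / PAGE_SIZE;     /* pages ahead of it */
        if (nback < 1)
            nback = 1;
        if (nforw < 1)
            nforw = 1;

        uvmadvice[MADV_NORMAL].nback = nback;
        uvmadvice[MADV_NORMAL].nforw = nforw;
    }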
|
|
the pmap_update() to the end of the loop, rather than after each loop
iteration - which might not even end up invoking pmap_enter()!
Quiet blessing from guenther@ deraadt@
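Sketch of the shape of the change; the protection and flag values are
illustrative, and uvm_pagelookup() stands in for however the page is found:

    /* enter all the mappings first... */
    for (va = start; va < end; va += PAGE_SIZE, off += PAGE_SIZE) {
        pg = uvm_pagelookup(uobj, off);
        if (pg == NULL)
            continue;       /* this iteration enters nothing */
        pmap_enter(pmap, va, VM_PAGE_TO_PHYS(pg),
            PROT_READ, PMAP_CANFAIL);
    }
    /* ...then flush/update the MMU state once, after the loop */
    pmap_update(pmap);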
|
|
<uvm/uvm.h> if possible and remove double inclusions.
ok beck@, mlarkin@, deraadt@
|
|
Tweak the handling of ktrace EMUL when changing ktracing: only
generate one per process (not one per thread) and pass the correct
proc pointer down to the VFS layer. Permit generation of NAMI and
CSW records inside ktrace(2) itself.
ok deraadt@ millert@
|
|
PG_PMAPMASK as all the possible pmap-specific bits (similar to the other
PG_fooMASK) to make sure MI code does not need to be updated the next time
more bits are allocated to greedy pmaps.
No functional change, soon to be used by the (greedy) mips64 pmap.
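Sketch of the resulting arrangement in <uvm/uvm_page.h>; the bit values are
illustrative, only the grouping into PG_PMAPMASK matters:

    #define PG_PMAP0        0x04000000      /* bits owned by the pmap */
    #define PG_PMAP1        0x08000000
    #define PG_PMAP2        0x10000000
    #define PG_PMAP3        0x20000000
    #define PG_PMAPMASK     (PG_PMAP0 | PG_PMAP1 | PG_PMAP2 | PG_PMAP3)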
|
|
after analysis and testing. When flushing a large mmapped file, we can
eat up all the reserve bufs, but there's a good chance there will be more
clean ones available.
ok beck kettenis
|
|
|
|
|
|
<machine/pmap.h> where it belongs, and compensate in <uvm/uvm_extern.h>
by including <uvm/uvm_pmap.h> before <uvm/uvm_page.h>. Tested on all
MACHINE_ARCH but amd64 and i386 (and hppa64).
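I.e. the compensation in <uvm/uvm_extern.h> is just an ordering constraint:

    #include <uvm/uvm_pmap.h>       /* must now come before uvm_page.h */
    #include <uvm/uvm_page.h>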
|