path: root/sys/uvm

2016-06-13  Mark Kettenis
In uvm_map(), call uvm_unmap_detach_intrsafe() if we have an interrupt-safe
map, to avoid grabbing the kernel lock when pool_get() needs to allocate a new pool page. Hopefully this really is the last case where we might grab the kernel lock for interrupt-safe pools. ok mpi@

2016-06-08  Theo de Raadt
Dereference p_p once rather than 4 times.

2016-06-08  Theo de Raadt
hppa & mips64 now can do the full W^X check. (Make sure you have
a new kernel before this change, and ld.so updated)

2016-06-05  Stefan Kempf
Add uvm_share() to share a memory range between two address spaces
Its primary use is to make guest VM memory accessible to the host (e.g. vmd(8)). That will later allow us to remove the readpage and writepage ioctls from vmm(4), and use ordinary loads and stores instead. "looks good to me" kettenis@
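
A kernel-side sketch of how this might be used, assuming a prototype along the lines shown below (the exact signature lives in uvm_extern.h and may differ); map_guest_page() is a hypothetical helper, not code from the tree:

    /*
     * Sketch only, not code from the tree: assumed prototype for the
     * new call, roughly as it might appear in <uvm/uvm_extern.h>.
     */
    int uvm_share(vm_map_t dstmap, vaddr_t dstaddr, vm_prot_t prot,
        vm_map_t srcmap, vaddr_t srcaddr, vsize_t sz);

    /*
     * Hypothetical helper: expose one page of guest RAM (owned by the
     * guest vm_map) read/write in the host process's map, so vmd(8)
     * can use plain loads and stores instead of readpage/writepage
     * ioctls.
     */
    int
    map_guest_page(struct vm_map *host_map, vaddr_t host_va,
        struct vm_map *guest_map, vaddr_t guest_va)
    {
        return uvm_share(host_map, host_va, PROT_READ | PROT_WRITE,
            guest_map, guest_va, PAGE_SIZE);
    }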

2016-06-04  Stuart Henderson
If a process trips the W^X violation check, abort it unless it came
from a filesystem with the wxallowed flag set. ok deraadt

Current status:

    Filesystem  Binary    Action
    ----------  ------    ------
    wxallowed   normal    violation -> log but don't abort
    wxallowed   wxneeded  W^X silently allowed
    normal      normal    violation -> abort
    normal      wxneeded  process won't run at all

See http://www.openbsd.org/faq/current.html#r20160527

2016-06-03  Mark Kettenis
We should never decrease uvm_maxkaddr. Currently this may happen if
uvm_map_kmem_grow() gets called for submaps of the kernel_map on architectures that don't implement pmap_growkernel(). When that happens we get the infamous "address selector returned unavailable address" panic. ok tedu@, mglocker@, beck@, stefan@

2016-06-02  Ted Unangst
print the size when an unavailable address is returned. it is useful.
ok millert stefan

2016-06-02  Ingo Schwarze
Prevent vsize_t underflow when checking RLIMIT_DATA, which made the
check ineffective when you already had more memory than your limit allowed. I noticed after writing this diff that millert@ already committed a fix for this in rev. 1.74 (2009/06/01), but it got backed out with the giant pmemrange backout two weeks later and was never restored. OK tedu@ ("just fix it" and "go ahead with your version") stefan@ also agrees that a check is needed.
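
The bug class here is plain unsigned wrap-around: vsize_t is unsigned, so computing "limit - used" once the process is already over its limit produces a huge value and the check always passes. A small user-space illustration (made-up numbers, not the kernel code):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t limit = 64ULL * 1024 * 1024;   /* RLIMIT_DATA cap        */
        uint64_t used  = 80ULL * 1024 * 1024;   /* already over the limit */
        uint64_t req   = 16ULL * 1024 * 1024;   /* new allocation         */

        /*
         * Broken check: "limit - used" wraps around to a huge unsigned
         * value once used > limit, so every request passes.
         */
        if (req <= limit - used)
            printf("broken check: allowed\n");

        /* Correct check: guard the subtraction before comparing. */
        if (used >= limit || req > limit - used)
            printf("correct check: denied\n");
        return 0;
    }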

2016-06-01  Philip Guenther
Delete the kernel compat bits for old mmap() MAP_OLD* flags
ok deraadt@ matthew@ jca@

2016-05-30  Theo de Raadt
Identify W^X labelled binaries at execve() time based upon WX_OPENBSD_WXNEEDED
flag set by ld -zwxneeded. Such binaries are allowed to run only on wxallowed mountpoints. They do not report mmap/mprotect problems. Rate limit mmap/mprotect reports from other binaries. These semantics are chosen to encourage progress in the ports ecosystem, without overwhelming the developers who work in the area. ok sthen kettenis

2016-05-30  Theo de Raadt
backout to insert correct commit message

2016-05-30  Theo de Raadt
*** empty log message ***

2016-05-27  Theo de Raadt
W^X violations are no longer permitted by default. A kernel log message
is generated, and mprotect/mmap return ENOTSUP. If the sysctl(8) flag kern.wxabort is set then a SIGABRT occurs instead, for gdb use or coredump creation. W^X violating programs can be permitted on a per-filesystem (ffs/nfs) basis, using the "wxallowed" mount option. One day far in the future upstream software developers will understand that W^X violations are a tremendously risky practice and that style of programming will be banished outright. Until then, we expect most users will need to use the wxallowed option on their /usr/local filesystem. At least your other filesystems don't permit such programs. ok jca kettenis mlarkin natano
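
A minimal user-space check of the new behaviour (the exact errno, and whether the process is aborted instead, depend on kern.wxabort and on the mount options described above):

    #include <sys/mman.h>

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Request a writable and executable anonymous page: a W^X violation. */
        void *p = mmap(NULL, getpagesize(),
            PROT_READ | PROT_WRITE | PROT_EXEC,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED)
            printf("W|X mapping refused: %s\n", strerror(errno));
        else
            printf("W|X mapping allowed (wxallowed mount?)\n");
        return 0;
    }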

2016-05-26  Stefan Kempf
Make amaps use less kernel memory (2nd try)
The original diff would crash at least i386 and powerpc, as spotted by guenther@. The reason was an incorrect use of sizeof in amap_lookups(). Confirmation that powerpc works by mpi@ and mglocker@ "throw it in" deraadt@

Original commit message:
This is achieved by grouping amap slots into chunks that are allocated on-demand by pool(9). Endless "fltamapcopy" loops because of kmem shortage should be solved now. The kmem savings are also important to later enable vmm(4) to use large shared memory mappings for guest VM RAM. This adapts libkvm also because the amap structure layout has changed. Testing and fix of libkvm glitch in initial diff by tb@ Feedback and "time to get this in" kettenis@
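
Independent of the uvm specifics, the idea is to replace one flat, up-front array of anon pointers with fixed-size chunks that are only allocated for ranges that are actually touched. A simplified user-space sketch of that shape (names invented for illustration; the real code lives in uvm_amap.c and uses pool(9)):

    #include <stdlib.h>

    #define CHUNK_SLOTS 32                  /* slots grouped per chunk */

    struct slot_chunk {
        void *slots[CHUNK_SLOTS];           /* one "anon" pointer per slot */
    };

    struct sparse_amap {
        unsigned int nslots;                /* total slots in the mapping */
        struct slot_chunk **chunks;         /* chunk pointers, mostly NULL */
    };

    /* Look up a slot; allocate its chunk on demand only when writing. */
    static void **
    slot_ref(struct sparse_amap *am, unsigned int slot, int alloc)
    {
        unsigned int ci = slot / CHUNK_SLOTS;

        if (am->chunks[ci] == NULL) {
            if (!alloc)
                return NULL;                /* untouched range: no memory */
            am->chunks[ci] = calloc(1, sizeof(struct slot_chunk));
            if (am->chunks[ci] == NULL)
                return NULL;                /* caller handles the failure */
        }
        return &am->chunks[ci]->slots[slot % CHUNK_SLOTS];
    }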

2016-05-22  Philip Guenther
Revert previous: breaks i386 and powerpc, probably all non-PMAP_DIRECT archs

2016-05-22  Stefan Kempf
Make amaps use less kernel memory
This is achieved by grouping amap slots into chunks that are allocated on-demand by pool(9). Endless "fltamapcopy" loops because of kmem shortage should be solved now. The kmem savings are also important to later enable vmm(4) to use large shared memory mappings for guest VM RAM. This adapts libkvm also because the amap structure layout has changed. Testing and fix of libkvm glitch in initial diff by tb@ Feedback and "time to get this in" kettenis@

2016-05-11  Theo de Raadt
remove hppa64 port, which we never got going beyond a broken single-user boot.
hppa reverse-stack gives us a valuable test case, but most developers don't have a 2nd one to proceed further with this. ok kettenis

2016-05-08  Stefan Kempf
Additional parameter for amap_alloc().
It is supposed to control whether an amap should allocate memory to store anon pointers lazily or upfront. Needed for upcoming amap changes. ok kettenis@

2016-05-08  Stefan Kempf
Wait for RAM in uvm_fault when allocating uvm structures fails
Only fail hard when running out of swap space also, as suggested by kettenis@ While there, let amap_add() return a success status and handle amap_add() errors in uvm_fault() similar to other out of RAM situations. These bits are needed for further amap reorganization diffs. lots of feedback and ok kettenis@

2016-05-05  Stefan Kempf
Remove uvm_mapentry_freecmp which has been unused for years
Found by David Hill with clang.

2016-04-16  Stefan Kempf
Remove am_maxslot from amap.
am_maxslot represents the total number of slots an amap can be extended to. Since we do not extend amaps, this field as well as rounding the number of slots to the next malloc bucket is not useful. This also removes the corresponding output from procmap(1). ok kettenis@

2016-04-12  Stefan Kempf
Simplify amap traversal in amap_swap_off.
There's no need to insert marker elements to find the next item in the amap list. The next amap can be determined by looking at the currently examined amap. Care must be taken to get the next element before the current amap is possibly deleted, and after all the current amap's pages were read in from swap (because the page-in may sleep and remove items from the amap list).
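
The same pattern applies to any list walk where processing an element may free it or sleep: fetch the successor before working on the current element. A small user-space analogue using <sys/queue.h> (not the amap code itself):

    #include <sys/queue.h>

    #include <stdlib.h>

    struct item {
        LIST_ENTRY(item) entry;
        int flush_me;
    };

    LIST_HEAD(itemlist, item);

    static void
    flush_list(struct itemlist *list)
    {
        struct item *it, *next;

        for (it = LIST_FIRST(list); it != NULL; it = next) {
            /*
             * Grab the successor before touching "it": processing may
             * remove and free the current element.  (In amap_swap_off
             * the page-in may additionally sleep, so the next pointer
             * is taken only after the current amap's pages are in.)
             */
            next = LIST_NEXT(it, entry);
            if (it->flush_me) {
                LIST_REMOVE(it, entry);
                free(it);
            }
        }
    }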

2016-04-04  Stefan Kempf
UVM_FLAG_AMAPPAD has no effect anymore, nuke it.
This flag caused amaps to be allocated with additional spare slots, to make extending them cheaper. However, the kernel never extends amaps, so allocating spare slots is pointless. Also UVM_FLAG_AMAPPAD only has an effect in combination with UVM_FLAG_OVERLAY. The only function that used both flags was sys_obreak, but that function had the use of UVM_FLAG_OVERLAY removed recently. While there, kill the unused prototypes amap_flags and amap_refs. They're defined as macros already. ok mlarkin@ kettenis@ mpi@

2016-03-29  Charles Longeau
Remove dead assignments and now unused variables.
Found by LLVM/Clang Static Analyzer. ok mpi@ stefan@

2016-03-27  Stefan Kempf
amap_extend is never called, remove it.
In the code, this function is called when vm_map_entries are merged. However, only kernel map entries are merged, and these do not use amaps. Therefore amap_extend() is never called at runtime. ok millert@, KASSERT suggestion and ok mpi@

2016-03-19  natano
Remove the unused flags argument from VOP_UNLOCK().
torture tested on amd64, i386 and macppc ok beck mpi stefan "the change looks right" deraadt

2016-03-16  Stefan Kempf
Remove redundant check.
The compiler is also smart enough to recognize that this is redundant. The resulting code on amd64 is basically equivalent (slightly different register allocation and instruction scheduling). ok mpi@ deraadt@

2016-03-15  Kenneth R Westerback
'accomodate' -> 'accommodate' in comments.
Started by diff from Mical Mazurek.

2016-03-15  Stefan Kempf
Allocate amap slots for a virtual memory range reserved with sbrk lazily.
This avoids wasting kernel memory if the user process does not make use of the allocated memory. Testing by sthen@ and tobiasu@, thanks! ok deraadt@

2016-03-15  Stefan Kempf
For amaps with only a few slots, allocate the slots via pool(9)
This saves some memory compared to using malloc, because there's no roundup to the next bucket size. And it reduces kmem pressure at least for some architectures (e.g. amd64). Testing by sthen@ and tobiasu@, thanks! ok sthen@ deraadt@
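
The saving comes from malloc rounding each request up to a power-of-two bucket, while a pool item is exactly the requested size. A quick illustration of the roundup waste (illustrative sizes, not the real amap numbers):

    #include <stdio.h>

    /*
     * Round a request up to the next power-of-two bucket, the way a
     * bucket-based allocator would (smallest bucket assumed to be 16).
     */
    static size_t
    bucket_roundup(size_t sz)
    {
        size_t b = 16;

        while (b < sz)
            b <<= 1;
        return b;
    }

    int
    main(void)
    {
        size_t sizes[] = { 24, 48, 72, 136, 520 };  /* sample requests */

        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
            printf("request %4zu -> bucket %4zu (waste %zu)\n",
                sizes[i], bucket_roundup(sizes[i]),
                bucket_roundup(sizes[i]) - sizes[i]);
        return 0;
    }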

2016-03-09  Theo de Raadt
remove vaxisms

2016-03-07  Christian Weisgerber
Sync no-argument function declaration and definition by adding (void).
ok mpi@ millert@

2016-03-06  Stefan Kempf
Remove unused amap_share_protect().
ok mpi@ visa@

2016-03-06  Stefan Kempf
Tweak uvm assertions to avoid locking in some cases.
When only one thread can access a map, there's no need to lock it. Tweak the assertion instead of appeasing it by acquiring a lock when it's not necessary. ok kettenis@

2016-03-03  Christian Weisgerber
Remove option USER_LDT and everything depending on it.
Remove machdep.userldt sysctl. Remove i386_[gs]et_ldt syscall stub from libi386. Remove i386_[gs]et_ldt regression test. ok mlarkin@ millert@ guenther@

2016-01-29  tb
Therefor -> Therefore (where appropriate)
from ray@, ok jmc@

2016-01-09  Mark Kettenis
Use uiomove(9) instead of uiomovei(9). From Martin Natano.

2015-12-16  Mark Kettenis
Avoid grabbing the kernel lock in uvm_unmap() if we have an interrupt-safe
map. This removes the (hopefully) last case in which pool_put() might try to grab the kernel lock for interrupt-safe pools. Note that pools that are created with the PR_WAITOK flag will still grab the kernel lock. ok mpi@, tedu@

2015-12-06  Bret Lambert
Since the page zeroing thread runs without the kernel lock,
it relies upon the fpageq lock for data consistency and sleep/wakeup interlocking. Therefore, code which modifies page zeroing thread data or performs a wakeup of the thread must also hold the fpageq lock. Fix an instance where this was not the case. ok kettenis@
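
The rule being enforced is the usual sleep/wakeup interlock: whoever changes the data the sleeper tests, or issues the wakeup, must hold the same lock the sleeper sleeps on, otherwise the wakeup can slip in between the sleeper's test and its sleep. A user-space analogue with pthreads (the kernel code uses the fpageq mutex with msleep/wakeup):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
    static bool work_pending;

    /* Consumer: test the predicate and sleep under one and the same lock. */
    static void *
    zero_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        for (;;) {
            while (!work_pending)
                pthread_cond_wait(&cv, &lock);
            work_pending = false;
            /* ... do the actual work ... */
        }
        return NULL;
    }

    /* Producer: modify the shared state and wake up while holding the lock. */
    static void
    post_work(void)
    {
        pthread_mutex_lock(&lock);
        work_pending = true;            /* change the data under the lock  */
        pthread_cond_signal(&cv);       /* and wake up under the same lock */
        pthread_mutex_unlock(&lock);
    }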

2015-12-02  Bret Lambert
remove declaration for nonexistent function
ok tedu@

2015-11-14  Miod Vallat
mutli -> multi

2015-11-11  mmcc
Remove the superfluous typedef uvm_flag_t (unsigned int). Also, fix an
associated mistake in the uvm manpage. Suggested by and ok tedu@

2015-11-10  Mike Larkin
UVM change needed for vmm.
discussed with miod, deraadt, and guenther.

2015-11-01  Sebastien Marie
refactor pledge_*_check and pledge_fail functions
- rename the _check functions without the suffix: a "pledge" function called
  from anywhere is a "check" function.
- move the responsibility for calling pledge_fail into the _check functions;
  remove it from the callers.
- make proper use of the (potential) error returned by the _check() functions.
- add pledge_kill() and pledge_protexec()
with and OK deraadt@

2015-10-30  Miod Vallat
Fix two (verified to be harmless) off-by-ones in bounds checks in
uvm_page_init() (causing uvmexp.npages to be slightly wrong if pmap_steal_memory() has been used) and uvm_page_physload(). ok guenther@ kettenis@ visa@ beck@

2015-10-23  Theo de Raadt
Add 3 new pledge requests. "ps" exposes enough sysctl information for
ps-style programs (there are quite a few in the tree, including tmux). "vminfo" exposes a bit more system operation information, which many observation programs want (such as top). "settime" allows setting the system time, and will be used to pledge-protect the last ntpd process.

2015-10-09  Theo de Raadt
Rename tame() to pledge(). This interface has evolved to be more
strict than anticipated. It allows a programmer to pledge/promise/covenant that their program will operate within an easily defined subset of the Unix environment, or it pays the price.

2015-10-08  Mark Kettenis
Lock the page queues by turning uvm_lock_pageq() and uvm_unlock_pageq() into
mtx_enter() and mtx_leave() operations. Not 100% sure this won't blow up, but there is only one way to find out, and we need this to make progress on further unlocking uvm. prodded by deraadt@
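
Concretely this turns the page-queue lock wrappers into mutex operations; a sketch of what the wrappers look like after the change (the variable name and header placement here are assumptions, not copied from the tree):

    #include <sys/mutex.h>

    /* Sketch only: page-queue lock as a mutex instead of a simple lock. */
    struct mutex uvm_pageqlock;

    #define uvm_lock_pageq()    mtx_enter(&uvm_pageqlock)
    #define uvm_unlock_pageq()  mtx_leave(&uvm_pageqlock)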

2015-10-01  Mark Kettenis
In uvm_map_splitentry(), grab the kernel lock before calling into the amap
or pager code. We may end up here without holding the kernel lock from uvm_unmap(). "ja ja" tedu@

2015-09-30  Sebastien Marie
implement new "prot_exec" tame(2) request:
- by default, a tamed program cannot use PROT_EXEC with mmap(2) or mprotect(2)
- to allow it, use the "prot_exec" request (which could be dropped later)
initial idea from deraadt@ and kettenis@
"makes complete sense" beck@
ok deraadt@
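
From user space the effect is that an executable mapping is refused unless the request was granted; a hypothetical test written against the later pledge(2) spelling of the same interface (the promise name and failure mode may have changed since this commit):

    #include <sys/mman.h>

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Restrict ourselves without asking for "prot_exec". */
        if (pledge("stdio", NULL) == -1) {
            perror("pledge");
            return 1;
        }

        /* An executable mapping should now be refused. */
        void *p = mmap(NULL, getpagesize(), PROT_READ | PROT_EXEC,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED)
            printf("PROT_EXEC mapping refused: %s\n", strerror(errno));
        else
            printf("PROT_EXEC mapping allowed\n");
        return 0;
    }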