This prevents too small amaps from being allocated as a result of
forcing the allocation of a large number of slots.
Based on an analysis from Jesse Hertz and Tim Newsham.
ok kettenis@
memory if the file backing the mapping is truncated, we should check resource
limits. This prevents callers from triggering a kernel panic and a potential
integer overflow in the amap code by forcing the allocation of too many slots.
Based on an analysis from Jesse Hertz and Tim Newsham.
ok deraadt@
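
A minimal sketch of the kind of check described above; the identifiers
(sz, lim) and the exact limit consulted are illustrative assumptions,
not the committed code:

    /*
     * Hedged sketch: bound the slot count before the amap is
     * allocated, instead of letting the arithmetic overflow
     * deeper in the amap code.
     */
    size_t slots = atop(round_page(sz));  /* one slot per page */

    if (slots > atop(lim))   /* lim: e.g. rlim_cur of RLIMIT_DATA */
            return ENOMEM;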
wrong.
memory if the file backing the mapping is truncated, we should check resource
limits. This prevents callers from triggering a kernel panic and a potential
integer overflow in the amap code by forcing the allocation of too many slots.
Based on an analysis from Jesse Hertz and Tim Newsham.
ok deraadt@
Uninitialized variables used in an if/else could cause a slower
codepath to be taken, but the end effect of both paths is the same.
Found by jsg@
- The number of slots must be initialized in the chunk of a small amap,
  otherwise unmapping part of an mmap()'d range would delay the freeing
  of vm_anons for small amaps
- If the first chunk of a bucket is freed, check whether the next chunk
  in the list has to become the new first chunk (see the sketch below)
- Use a separate loop for each type of traversal (small amap, by bucket,
  by list) in amap_wiperange(). This makes the code easier to follow
  and also fixes a bug where too many chunks were wiped out when
  traversing by list
However, the last two bugs should happen only when turning a previously
private mapping into a shared one, then forking, and then having
both processes unmap a part of the mapping.
snap and ports build tested by krw@, review by kettenis@
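
A hedged sketch of the bucket fix referenced in the second item above;
the field and macro names only approximate the amap code:

    /*
     * If the first chunk of a bucket is freed, the next chunk on
     * the amap's chunk list becomes the new bucket head only if it
     * still hashes to the same bucket.
     */
    if (amap->am_buckets[bucket] == chunk) {
            struct vm_amap_chunk *next = TAILQ_NEXT(chunk, ac_list);

            if (next != NULL &&
                UVM_AMAP_BUCKET(amap, next->ac_baseslot) == bucket)
                    amap->am_buckets[bucket] = next;
            else
                    amap->am_buckets[bucket] = NULL;
    }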
ok kettenis@ visa@
map, to avoid grabbing the kernel lock when pool_get() needs to allocate
a new pool page. Hopefully this really is the last case where we might grab
the kernel lock for interrupt-safe pools.
ok mpi@
a new kernel before this change, and ld.so updated)
Its primary use is to make guest VM memory accessible to the host
(e.g. vmd(8)). That will later allow us to remove the readpage and
writepage ioctls from vmm(4), and use ordinary loads and stores instead.
"looks good to me" kettenis@
from a filesystem with the wxallowed flag set. ok deraadt
Current status:
Filesystem  Binary    Action
----------  --------  ------
wxallowed   normal    violation -> log but don't abort
wxallowed   wxneeded  W^X silently allowed
normal      normal    violation -> abort
normal      wxneeded  process won't run at all
See http://www.openbsd.org/faq/current.html#r20160527
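
The same matrix, restated as a small decision sketch; the helper and
its constants are hypothetical, not part of the commit:

    enum wx_action { WX_LOG, WX_ALLOW, WX_ABORT, WX_NOEXEC };

    /* Hypothetical helper mirroring the table above. */
    static enum wx_action
    wx_check(int fs_wxallowed, int bin_wxneeded)
    {
            if (fs_wxallowed)
                    return bin_wxneeded ? WX_ALLOW : WX_LOG;
            return bin_wxneeded ? WX_NOEXEC : WX_ABORT;
    }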
uvm_map_kmem_grow() gets called for submaps of the kernel_map on
architectures that don't implement pmap_growkernel(). When that happens
we get the infamous "address selector returned unavailable address" panic.
ok tedu@, mglocker@, beck@, stefan@
ok millert stefan
check ineffective when you already had more memory than your limit
allowed.
I noticed after writing this diff that millert@ already committed a fix
for this in rev. 1.74 (2009/06/01), but it got backed out with the giant
pmemrange backout two weeks later and was never restored.
OK tedu@ ("just fix it" and "go ahead with your version")
stefan@ also agrees that a check is needed.
ok deraadt@ matthew@ jca@
flag set by ld -zwxneeded. Such binaries are allowed to run only on wxallowed
mountpoints. They do not report mmap/mprotect problems.
Rate limit mmap/mprotect reports from other binaries.
These semantics are chosen to encourage progress in the ports ecosystem,
without overwhelming the developers who work in the area.
ok sthen kettenis
is generated, and mprotect/mmap return ENOTSUP. If the sysctl(8) flag
kern.wxabort is set then a SIGABRT occurs instead, for gdb use or coredump
creation.
W^X violating programs can be permitted on a per-filesystem basis
(ffs/nfs), using the "wxallowed" mount option. One day far in the
future upstream software developers will understand that W^X violations
are a tremendously risky practice and that style of programming will
be banished outright. Until then, we recommend that most users use the
wxallowed option on their /usr/local filesystem. At least your other
filesystems don't permit such programs.
ok jca kettenis mlarkin natano
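
A hedged sketch of the violation path described above; flag and
variable names approximate the kernel code and are assumptions:

    /* On a W^X violation: log, optionally abort, else refuse. */
    if ((prot & (PROT_WRITE | PROT_EXEC)) == (PROT_WRITE | PROT_EXEC) &&
        !(map->flags & VM_MAP_WXALLOWED)) {
            log(LOG_NOTICE, "%s: W^X violation\n", p->p_comm);
            if (uvm_wxabort)          /* kern.wxabort */
                    sigexit(p, SIGABRT);  /* coredump / gdb use */
            return ENOTSUP;
    }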
The original diff would crash at least i386 and powerpc, as spotted by
guenther@. The reason was an incorrect use of sizeof in amap_lookups().
Confirmation that powerpc works by mpi@ and mglocker@
"throw it in" deraadt@
Original commit message:
This is achieved by grouping amap slots into chunks that are allocated
on-demand by pool(9). Endless "fltamapcopy" loops because of kmem
shortage should be solved now. The kmem savings are also important to later
enable vmm(4) to use large shared memory mappings for guest VM RAM.
This adapts libkvm also because the amap structure layout has changed.
Testing and fix of libkvm glitch in initial diff by tb@
Feedback and "time to get this in" kettenis@
This is achieved by grouping amap slots into chunks that are allocated
on-demand by pool(9). Endless "fltamapcopy" loops because of kmem
shortage should be solved now. The kmem savings are also important to later
enable vmm(4) to use large shared memory mappings for guest VM RAM.
This adapts libkvm also because the amap structure layout has changed.
Testing and fix of libkvm glitch in initial diff by tb@
Feedback and "time to get this in" kettenis@
hppa reverse-stack gives us a valuable test case, but most developers don't
have a 2nd one to proceed further with this.
ok kettenis
It is supposed to control whether an amap should allocate memory
to store anon pointers lazily or upfront. Needed for upcoming amap
changes.
ok kettenis@
Only fail hard when running out of swap space as well, as suggested
by kettenis@
While there, let amap_add() return a success status and handle
amap_add() errors in uvm_fault() similar to other out of RAM situations.
These bits are needed for further amap reorganization diffs.
lots of feedback and ok kettenis@
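
A hedged sketch of the resulting error handling in uvm_fault(); the
identifiers approximate the fault code:

    /* Treat amap_add() failure like any other out-of-RAM case:
     * back everything out, wait for memory, refault. */
    if (amap_add(&ufi->entry->aref,
        ufi->orig_rvaddr - ufi->entry->start, anon, 0) != 0) {
            uvmfault_unlockall(ufi, amap, NULL);
            uvm_anfree(anon);
            uvm_wait("fltamapadd");   /* illustrative wait channel */
            return ERESTART;
    }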
Found by David Hill with clang.
am_maxslot represents the total number of slots an amap can be extended
to. Since we do not extend amaps, neither this field nor the rounding
of the slot count to the next malloc bucket is useful.
This also removes the corresponding output from procmap(1).
ok kettenis@
There's no need to insert marker elements to find the next item in the
amap list. The next amap can be determined by looking at the currently
examined amap.
Care must be taken to get the next element before the current amap is
possibly deleted, and after all the current amap's pages were read in
from swap (because the page-in may sleep and remove items from the amap
list).
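
A hedged sketch of that ordering; the page-in helper is hypothetical:

    for (am = LIST_FIRST(&amap_list); am != NULL; am = am_next) {
            /* Page in this amap's anons first. This may sleep, and
             * other threads may unlink arbitrary amaps meanwhile. */
            amap_swap_off_one(am);            /* hypothetical helper */

            /* Only now read the successor ... */
            am_next = LIST_NEXT(am, am_list);

            /* ... after which the current amap may safely go away. */
            if (amap_refs(am) == 0)
                    amap_wipeout(am);
    }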
This flag caused amaps to be allocated with additional spare slots, to
make extending them cheaper. However, the kernel never extends amaps,
so allocating spare slots is pointless. Also UVM_FLAG_AMAPPAD only
has an effect in combination with UVM_FLAG_OVERLAY. The only function
that used both flags was sys_obreak, but that function had the use of
UVM_FLAG_OVERLAY removed recently.
While there, kill the unused prototypes amap_flags and amap_refs.
They're defined as macros already.
ok mlarkin@ kettenis@ mpi@
Found by LLVM/Clang Static Analyzer.
ok mpi@ stefan@
In the code, this function is called when vm_map_entries are merged.
However, only kernel map entries are merged, and these do not use amaps.
Therefore amap_extend() is never called at runtime.
ok millert@, KASSERT suggestion and ok mpi@
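
The suggested assertion, sketched (placement illustrative): since only
kernel map entries are merged and those never carry an amap, merging
can assert instead of extending:

    KASSERT(entry->aref.ar_amap == NULL);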
torture tested on amd64, i386 and macppc
ok beck mpi stefan
"the change looks right" deraadt
The compiler is also smart enough to recognize that this is redundant.
The resulting code on amd64 is basically equivalent (slightly different
register allocation and instruction scheduling).
ok mpi@ deraadt@
Started by a diff from Michal Mazurek.
This avoids wasting kernel memory if the user process does not make
use of the allocated memory.
Testing by sthen@ and tobiasu@, thanks!
ok deraadt@
This saves some memory compared to using malloc, because there's
no roundup to the next bucket size. And it reduces kmem pressure
at least for some architectures (e.g. amd64).
Testing by sthen@ and tobiasu@, thanks!
ok sthen@ deraadt@
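
A minimal sketch of the idea; the pool name and sizing are
illustrative, and the real code presumably keeps one pool per slot
count rather than a single pool:

    struct pool amap_slot_pool;

    /* pool(9) hands out objects of exactly this size; malloc(9)
     * would round the request up to the next bucket. */
    pool_init(&amap_slot_pool, nslots * sizeof(struct vm_anon *),
        0, IPL_NONE, PR_WAITOK, "amapslot", NULL);

    slots = pool_get(&amap_slot_pool, PR_WAITOK);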
ok mpi@ millert@
ok mpi@ visa@
When only one thread can access a map, there's no need
to lock it. Tweak the assertion instead of appeasing it
by acquiring a lock when it's not necessary.
ok kettenis@
Remove machdep.userldt sysctl.
Remove i386_[gs]et_ldt syscall stub from libi386.
Remove i386_[gs]et_ldt regression test.
ok mlarkin@ millert@ guenther@
from ray@, ok jmc@
map. This removes the (hopefully) last case in which pool_put() might try
to grab the kernel lock for interrupt-safe pools. Note that pools that are
created with the PR_WAITOK flag will still grab the kernel lock.
ok mpi@, tedu@
it relies upon the fpageq lock for data consistency and
sleep/wakeup interlocking.
Therefore, code which modifies page zeroing thread data
or performs a wakeup of the thread must also hold the
fpageq lock.
Fix an instance where this was not the case.
ok kettenis@
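
A hedged sketch of the interlock rule; the counter used as the wakeup
channel is illustrative:

    uvm_lock_fpageq();            /* guards zero-thread state */
    uvmexp.zeropages += n;        /* modify under the lock ... */
    wakeup(&uvmexp.zeropages);    /* ... and wake under the same lock */
    uvm_unlock_fpageq();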
ok tedu@
associated mistake in the uvm manpage.
Suggested by and ok tedu@
discussed with miod, deraadt, and guenther.