Age | Commit message | Author |
|
initial thread
ok jsing@ kettenis@
|
|
powerpc: rename second argument of pmap_proc_iflush() to match other archs
ok kettenis@
|
|
This was caused by an integer overflow in a loop. mlarkin@
noticed the hang when trying to run a vmm(4) guest with lots of RAM.
|
|
ok mpi@ mikeb@
|
|
ok guenther
|
|
vm_page structs go into three trees, uvm_objtree, uvm_pmr_addr, and
uvm_pmr_size. all these have been moved to RBT code.
this should give us a decent chunk of code space back.
|
|
this tree is interesting because it uses all the red black tree
features, specifically the augment callback that's called on tree
topology changes, and it poisons entries as they're removed from
the tree and checks them as they're inserted back in.
ok stefan@
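For readers unfamiliar with the function-based tree API, here is a
minimal, self-contained sketch of the RBT interface in <sys/tree.h>;
the struct and field names are hypothetical, and the augment callback
and entry poisoning used by the real uvm_map tree are omitted.

/*
 * Minimal sketch of the RBT interface from <sys/tree.h>.  The
 * struct and field names are made up; the real tree hangs off
 * struct vm_map_entry and additionally registers an augment
 * callback, which this sketch leaves out.
 */
#include <sys/tree.h>

struct node {
        RBT_ENTRY(node)  n_entry;       /* tree linkage */
        unsigned long    n_key;
};

static inline int
node_cmp(const struct node *a, const struct node *b)
{
        if (a->n_key < b->n_key)
                return (-1);
        if (a->n_key > b->n_key)
                return (1);
        return (0);
}

RBT_HEAD(node_tree, node);
RBT_PROTOTYPE(node_tree, node, n_entry, node_cmp);
RBT_GENERATE(node_tree, node, n_entry, node_cmp);

void
node_example(void)
{
        struct node_tree tree;
        struct node a = { .n_key = 42 };
        struct node key = { .n_key = 42 };

        RBT_INIT(node_tree, &tree);
        RBT_INSERT(node_tree, &tree, &a);       /* NULL on success */
        if (RBT_FIND(node_tree, &tree, &key) != NULL) {
                /* key 42 is present in the tree */
        }
}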
|
|
the ioff argument to pool_init() is unused and has been for many
years, so this replaces it with an ipl argument. because the ipl
will be set on init we no longer need pool_setipl.
most of these changes have been done with coccinelle using the spatch
below. cocci sucks at formatting code though, so i fixed that by hand.
the manpage and subr_pool.c bits i did myself.
ok tedu@ jmatthew@
@ipl@
expression pp;
expression ipl;
expression s, a, o, f, m, p;
@@
-pool_init(pp, s, a, o, f, m, p);
-pool_setipl(pp, ipl);
+pool_init(pp, s, a, ipl, f, m, p);
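As a concrete illustration of what the spatch does to a caller, here
is a hypothetical conversion; "struct foo", foo_pool and "foopl" are
made-up names, not code from the tree.

/* hypothetical subsystem init; struct foo and "foopl" are made up */
#include <sys/param.h>
#include <sys/pool.h>

struct foo {
        int     f_val;
};

struct pool foo_pool;

void
foo_init(void)
{
        /*
         * old idiom (two calls):
         *      pool_init(&foo_pool, sizeof(struct foo), 0, 0, 0,
         *          "foopl", NULL);
         *      pool_setipl(&foo_pool, IPL_VM);
         *
         * new idiom: the ipl goes where the unused ioff argument was.
         */
        pool_init(&foo_pool, sizeof(struct foo), 0, IPL_VM, 0,
            "foopl", NULL);
}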
|
|
Checking whether a memory range could be mprotect()'ed to PROT_EXEC
attempts to put every mapping into the uaddr_exe range, if it exists.
This would fill up the exe range on i386 quickly, once uaddr_exe gets
used. So only use uaddr_exe if we know PROT_EXEC is needed for sure.
No change in current behavior, since uaddr_exe will only be used
with uvm pivots.
ok tedu@
|
|
Fixes uvm pivots bug that would create non-page aligned addresses.
This fix is in code that's not yet enabled.
|
|
min is already clamped before invoking these functions.
ok kettenis@
|
|
The new semantics are that W^X violations are reported to the
application via ENOTSUP. Forgot to fix this during the last change.
Spotted by kettenis
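A minimal userland illustration of the reported failure mode (a
hypothetical test program, not part of this change):

#include <sys/mman.h>

#include <err.h>
#include <errno.h>
#include <stdio.h>

int
main(void)
{
        size_t len = 4096;
        void *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
                err(1, "mmap");

        /* under the new semantics a W^X violation is reported via ENOTSUP */
        if (mprotect(p, len, PROT_READ | PROT_WRITE | PROT_EXEC) == -1 &&
            errno == ENOTSUP)
                fprintf(stderr, "W^X violation refused (ENOTSUP)\n");

        return 0;
}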
|
|
This fixes coredumps of processes that use relro to make part of their
writable address space read-only.
ok guenther@
|
|
free static entries are kept in a simple linked list, so use SLIST
to make this obvious. the RB_PARENT manipulations are ugly and
confusing.
ok kettenis@
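For illustration, the usual <sys/queue.h> SLIST idiom for such a free
list looks like this; the struct and field names are hypothetical.

/* minimal free-list sketch using SLIST; names are made up */
#include <sys/queue.h>
#include <stddef.h>

struct entry {
        SLIST_ENTRY(entry)      e_link;         /* free-list linkage */
        /* ... payload ... */
};

SLIST_HEAD(freelist, entry);

static struct freelist free_entries = SLIST_HEAD_INITIALIZER(free_entries);

/* put an entry back on the free list */
static void
entry_free(struct entry *e)
{
        SLIST_INSERT_HEAD(&free_entries, e, e_link);
}

/* grab an entry from the free list, or NULL if it is empty */
static struct entry *
entry_alloc(void)
{
        struct entry *e;

        if (SLIST_EMPTY(&free_entries))
                return NULL;
        e = SLIST_FIRST(&free_entries);
        SLIST_REMOVE_HEAD(&free_entries, e_link);
        return e;
}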
|
|
"wxallowed" filesystems. mmap(2) & mprotect(2) now return ENOTSUP.
(To diagnose buggy programs, consider using sysctl kern.wxabort=1 and
looking at the coredumps)
ok kettenis tedu naddy
|
|
to prevent hitting assertions and/or corrupting data structures during that
phase.
ok deraadt@, tedu@
|
|
size of an address range.
ok deraadt@, tedu@
|
|
callers should probably check too, but checking here won't hurt.
possible panic reported by tim newsham.
ok kettenis
|
|
another flag in at some point. ok stefan
|
|
This prevents too-small amaps from being allocated as a result of
forcing the allocation of a large number of slots.
Based on an analysis from Jesse Hertz and Tim Newsham.
ok kettenis@
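Purely as an illustration of the class of check described above (not
the actual uvm_amap.c code), a bound on the requested slot count keeps
the size computation from wrapping and producing an undersized
allocation; the names and the limit are hypothetical.

#include <stddef.h>
#include <stdint.h>

struct slot;                    /* stand-in for the per-slot record */

/* reject slot counts whose total size would not fit in a size_t */
static int
slots_ok(size_t nslots)
{
        if (nslots == 0 || nslots > SIZE_MAX / sizeof(struct slot *))
                return 0;
        return 1;
}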
|
|
memory if the file backing the mapping is truncated, we should check resource
limits. This prevents callers from triggering a kernel panic and a potential
integer overflow in the amap code by forcing the allocation of too many slots.
Based on an analysis from Jesse Hertz and Tim Newsham.
ok deraadt@
|
|
wrong.
|
|
memory if the file backing the mapping is truncated, we should check resource
limits. This prevents callers from triggering a kernel panic and a potential
integer overflow in the amap code by forcing the allocation of too many slots.
Based on an analysis from Jesse Hertz and Tim Newsham.
ok deraadt@
|
|
Uninitialized variables used in an if/else could cause a slower
codepath to be taken, but the end effect of both paths is the same.
Found by jsg@
|
|
- The number of slots must be initialized in the chunk of a small amap,
  otherwise unmapping part of a mmap()'d range would delay freeing
  of vm_anons for small amaps
- If the first chunk of a bucket is freed, check if the next chunk in
  the list has to become the new first chunk
- Use a separate loop for each type of traversal (small amap, by bucket,
  by list) in amap_wiperange(). This makes the code easier to follow and
  also fixes a bug where too many chunks were wiped out when traversing
  by list
However, the last two bugs should happen only when turning a previously
private mapping into a shared one, then forking, and then having
both processes unmap a part of the mapping.
snap and ports build tested by krw@, review by kettenis@
|
|
ok kettenis@ visa@
|
|
map, to avoid grabbing the kernel lock when pool_get() needs to allocate
a new pool page. Hopefully this really is the last case where we might grab
the kernel lock for interrupt-safe pools.
ok mpi@
|
|
a new kernel before this change, and ld.so updated)
|
|
Its primary use is to make guest VM memory accessible to the host
(e.g. vmd(8)). That will later allow us to remove the readpage and
writepage ioctls from vmm(4), and use ordinary loads and stores instead.
"looks good to me" kettenis@
|
|
from a filesystem with the wxallowed flag set. ok deraadt
Current status:
Filesystem Binary Action
---------- ------ ------
wxallowed normal violation -> log but don't abort
wxallowed wxneeded W^X silently allowed
normal normal violation -> abort
normal wxneeded process won't run at all
See http://www.openbsd.org/faq/current.html#r20160527
|
|
uvm_map_kmem_grow() gets called for submaps of the kernel_map on
architectures that don't implement pmap_growkernel(). When that happens
we get the infamous "address selector returned unavailable address" panic.
ok tedu@, mglocker@, beck@, stefan@
|
|
ok millert stefan
|
|
check ineffective when you already had more memory than your limit
allowed.
I noticed after writing this diff that millert@ already committed a fix
for this in rev. 1.74 (2009/06/01), but it got backed out with the giant
pmemrange backout two weeks later and was never restored.
OK tedu@ ("just fix it" and "go ahead with your version")
stefan@ also agrees that a check is needed.
|
|
ok deraadt@ matthew@ jca@
|
|
flag set by ld -zwxneeded. Such binaries are allowed to run only on wxallowed
mountpoints. They do not report mmap/mprotect problems.
Rate limit mmap/mprotect reports from other binaries.
These semantics are chosen to encourage progress in the ports ecosystem,
without overwhelming the developers who work in the area.
ok sthen kettenis
|
|
is generated, and mprotect/mmap return ENOTSUP. If the sysctl(8) flag
kern.wxabort is set then a SIGABRT occurs instead, for gdb use or coredump
creation.
W^X violating programs can be permitted on a ffs/nfs filesystem-basis,
using the "wxallowed" mount option. One day far in the future
upstream software developers will understand that W^X violations are a
tremendously risky practice and that style of programming will be
banished outright. Until then, we recommend that most users use the
wxallowed option on their /usr/local filesystem. At least your other
filesystems don't permit such programs.
ok jca kettenis mlarkin natano
|
|
The original diff would crash at least i386 and powerpc, as spotted by
guenther@. The reason was an incorrect use of sizeof in amap_lookups().
Confirmation that powerpc works by mpi@ and mglocker@
"throw it in" deraadt@
Original commit message:
This is achieved by grouping amap slots into chunks that are allocated
on-demand by pool(9). Endless "fltamapcopy" loops because of kmem
shortage should be solved now. The kmem savings are also important to later
enable vmm(4) to use large shared memory mappings for guest VM RAM.
This adapts libkvm also because the amap structure layout has changed.
Testing and fix of libkvm glitch in initial diff by tb@
Feedback and "time to get this in" kettenis@
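A toy sketch of the chunking idea follows; the names and layout are
made up and do not match the real struct vm_amap.

#include <sys/queue.h>

#define CHUNK_SLOTS     32              /* hypothetical slots per chunk */

struct anon;                            /* stand-in for struct vm_anon */

struct amap_chunk {
        TAILQ_ENTRY(amap_chunk) c_link;         /* chunks allocated on demand */
        int                     c_baseslot;     /* first slot this chunk covers */
        int                     c_nused;        /* slots in use in this chunk */
        struct anon             *c_anon[CHUNK_SLOTS];
};

TAILQ_HEAD(chunk_list, amap_chunk);

/*
 * Instead of allocating one array of "maxslots" anon pointers up
 * front, an amap keeps a list of such chunks and only allocates
 * (from a pool) the chunks for slot ranges that are populated.
 */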
|
|
This is achieved by grouping amap slots into chunks that are allocated
on-demand by pool(9). Endless "fltamapcopy" loops because of kmem
shortage should be solved now. The kmem savings are also important to later
enable vmm(4) to use large shared memory mappings for guest VM RAM.
This adapts libkvm also because the amap structure layout has changed.
Testing and fix of libkvm glitch in initial diff by tb@
Feedback and "time to get this in" kettenis@
|
|
hppa reverse-stack gives us a valuable test case, but most developers don't
have a 2nd one to proceed further with this.
ok kettenis
|
|
It is supposed to control whether an amap should allocate memory
to store anon pointers lazily or upfront. Needed for upcoming amap
changes.
ok kettenis@
|
|
Only fail hard when also running out of swap space, as suggested by
kettenis@.
While there, let amap_add() return a success status and handle
amap_add() errors in uvm_fault() similar to other out of RAM situations.
These bits are needed for further amap reorganization diffs.
lots of feedback and ok kettenis@
|