Age | Commit message | Author |
|
Input and ok millert@
|
|
sure it will return an address within that range.
Use this in uaddr_rnd_select() to make sure we will not attempt to pick
an address beyond what we are allowed to map.
In my trees for 9 months, blackmailed s2k15 attendees into agreeing now would
be a good time to commit.
|
|
define in sys/limits.h. OK guenther@
|
|
are not necessary because the caller already ensures these. the tail
section for handling mlock can be shared as well.
ok beck guenther
|
|
for everything else.
-Adapt the anon version to be callable without the biglock held.
Done by tedu@, kettenis@ and me; pounded on a bunch.
This does not yet make mmap a NOLOCK call, but permits it to be so.
ok tedu@, kettenis@, guenther@, jsing@
|
|
doesn't have all the values and therefore can't be used everywhere.
ok deraadt@ kettenis@
|
|
objective: vnode.h doesn't include uvm_extern.h anymore.
followup changes: include uvm_extern.h or lock.h where necessary.
ok and help from deraadt
|
|
eliminating the must-be-kept-in-sync UVM_INH_* macros
ok deraadt@ tedu@
|
|
ok deraadt@ tedu@
|
|
PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
PROT_MASK is introduced as the one true way of extracting those bits.
Remove UVM_ADV_* wrapper, using the standard names.
ok doug guenther kettenis
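For reference, a minimal sketch of the intended usage, assuming PROT_MASK is
visible from <sys/mman.h> as the OR of the three protection bits; prot_of()
is an invented helper:

	#include <sys/mman.h>

	/*
	 * Extract the protection bits from a combined value with PROT_MASK
	 * instead of a UVM-specific wrapper.  Assumes
	 * PROT_MASK == (PROT_READ | PROT_WRITE | PROT_EXEC).
	 */
	int
	prot_of(int combined)
	{
		return (combined & PROT_MASK);
	}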
|
|
cause a SIGSEGV or SIGBUS when a mapped file gets truncated. Access to
pages that are not backed by a file on such a mapping will be replaced by
zero-filled anonymous pages. Makes passing file descriptors of mapped files
usable without having to play tricks with signal handlers.
"steal your mmap flag" deraadt@
|
|
after discussions with beck deraadt kettenis.
|
|
|
|
ok guenther
|
|
Move all legacy MAP_FOO values behind #ifndef _KERNEL and redefine
them to either be aliases for existing flags (e.g., MAP_COPY ->
MAP_PRIVATE) or 0.
Also, add MAP_OLDFOO defines (behind #ifndef _KERNEL) so the kernel
and kdump can remain compatible with current OpenBSD binaries.
ok deraadt
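A rough sketch of the header arrangement this describes; MAP_COPY comes from
the example above, while MAP_FILE is only an illustrative guess at one of the
flags redefined to 0:

	#ifndef _KERNEL
	/* legacy flags kept only for userland source compatibility */
	#define	MAP_COPY	MAP_PRIVATE	/* alias for an existing flag */
	#define	MAP_FILE	0		/* illustrative: now a no-op */
	#endif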
|
|
emphatic ok usual suspects, grudging ok miod
|
|
|
|
|
|
indicate that the kernel should fail with MAP_FAILED if the specified
address is not currently available instead of unmapping it.
Change ld.so on i386 to make use of __MAP_NOREMAP to improve
reliability.
__MAP_NOREMAP diff by guenther based on an earlier diff by Ariane;
ld.so bits by guenther and me
bulk build stress testing of earlier diffs by sthen
ok deraadt; committing now for further testing
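A hedged userland sketch of the described semantics; whether the flag is
meant to accompany MAP_FIXED is not spelled out above, so this assumes it is,
and the helper name is invented:

	#include <sys/types.h>
	#include <sys/mman.h>

	/*
	 * Ask for an exact address, but fail instead of unmapping whatever
	 * may already live there.
	 */
	void *
	map_at_or_fail(void *hint, size_t len)
	{
		return mmap(hint, len, PROT_READ | PROT_WRITE,
		    MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREMAP, -1, 0);
		/* MAP_FAILED if [hint, hint + len) was not free */
	}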
|
|
anticipation of further changes to closef(). No binary change.
ok krw@ miod@ deraadt@
|
|
- POSIX rules that a 0-byte mmap must return EINVAL (see the sketch below)
- our allocators are unable to distinguish between free memory and
0 bytes of allocated memory
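A tiny sketch of the user-visible rule, with an invented helper name:

	#include <sys/mman.h>
	#include <stddef.h>
	#include <errno.h>

	/* A zero-length mapping request must be rejected with EINVAL. */
	int
	zero_len_mmap_rejected(void)
	{
		void *p = mmap(NULL, 0, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);

		return (p == MAP_FAILED && errno == EINVAL);
	}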
|
|
no oks (it is really a pain to review properly)
extensively tested, I'm confident it'll be stable
'now is the time' from several icb inhabitants
Diff provides:
- ability to specify different allocators for different regions/maps
- a simpler implementation of the current allocator
- currently in compatibility mode: it will generate addresses similar to
those of the old allocator
|
|
sys_osigaltstack() is 7 years old and no longer needed; all glory to
the sys_sigaltstack()!
sys_ogetdirentries() is about 9 months old, but still acceptable
within our release cycle; move from STD to COMPAT_48 to make this
clearer for tedu@ next year.
sys_sbrk() and sys_sstk() are completely obsolete: all they do is
return ENOSYS.
ok guenther@
|
|
msync(size == 0) did strange things based on the original mapping
segments, trying to manipulate them. This code was copied from the
original vm when we moved to uvm.
POSIX says nothing about this behaviour, and anything that depends on it is
systemically broken, so rip it out and make sys_msync about 30% smaller.
ok deraadt@, tedu@, guenther@.
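With the special case gone, the portable pattern is simply to pass the real
length; a minimal sketch with an invented wrapper:

	#include <sys/types.h>
	#include <sys/mman.h>

	/* Synchronize exactly [addr, addr + len); no size == 0 magic. */
	int
	sync_region(void *addr, size_t len)
	{
		return msync(addr, len, MS_SYNC);
	}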
|
|
|
|
outside the tree.
|
|
vmmap is designed to perform address space randomized allocations,
without letting fragmentation of the address space go through the roof.
Some highlights:
- kernel address space randomization
- proper implementation of guardpages
- roughly 10% system time reduction during kernel build
Tested by a lot of people on tech@ and by developers.
Theo's machines are still happy.
|
|
resort if mmap fails otherwise to enable more complete address space
utilization. tested for a while with no ill effects.
|
|
heap gap from max data size. nothing else changes yet. ok deraadt
|
|
with a spinlock (even vslocked() buffers may fault in the right
(complicated) situation).
We solve this by preallocating a bounded array for the response and copying the
data out when all locks have been released.
ok thib@, beck@
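A generic sketch of that pattern, with every name made up: fill a bounded
array while the lock is held, drop the lock, then copy out, because copyout()
may fault and must not run under a spinlock.

	/* All names here are hypothetical; only the shape matters. */
	int
	copy_records_out(struct record *records, int nrecords,
	    struct mutex *records_mtx, void *uaddr)
	{
		struct record buf[MAXRECORDS];	/* preallocated, bounded */
		int i, n;

		mtx_enter(records_mtx);
		n = MIN(nrecords, MAXRECORDS);
		for (i = 0; i < n; i++)
			buf[i] = records[i];
		mtx_leave(records_mtx);

		/* safe to fault now: the lock is no longer held */
		return copyout(buf, uaddr, n * sizeof(buf[0]));
	}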
|
|
whether removing holes or parts of them is allowed or not.
Only allow hole removal in uvmspace_free(), when tearing the vmspace down.
ok art@
|
|
This has been tested very very thoroughly on all archs we have
excepting 88k and 68k. Please see cvs log for the individual commit
messages.
ok beck@, thib@
|
|
which is exactly what the macro does.
Macros that are nothing more than:
#define FUNCTION(arg) function(arg)
are almost always pointless and should go away.
OK blambert@
Agreed by many.
|
|
More backouts in line with previous ones; this appears to bring us back to a
stable condition.
A machine forced to 64MB of RAM cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
|
|
separately).
a change at or just before the hackathon has either exposed or added a
very very nasty memory corruption bug that is giving us hell right now.
So in the interest of kernel stability these diffs are being backed out
until such a time as that corruption bug has been found and squashed,
then the ones that are proven good may slowly return.
a quick hitlist of the main commits this backs out:
mine:
uvm_objwire
the lock change in uvm_swap.c
using trees for uvm objects instead of the hash
removing the pgo_releasepg callback.
art@'s:
putting pmap_page_protect(VM_PROT_NONE) in uvm_pagedeactivate() since
all callers called that just prior anyway.
ok beck@, ariane@.
prompted by deraadt@.
|
|
OK guenther@ otto@
|
|
pgo_releasepg() hook and just free the page the "normal" way in the one
place we'll ever see PG_RELEASED and should care (uvm_page_unbusy,
called in aiodoned).
ok art@, beck@, thib@
|
|
K&R function declarations, so switch them all over to ANSI style, in
accordance with the prophecy.
"go for it" art@
|
|
|
|
version for i386
more architectures and ctob() replacement are being worked on
prodded by and ok miod
|
|
ckuethe@ for a while. Okay beck@, "it is good timing" deraadt@.
|
|
simple_{lock/unlock}.
ok art@
|
|
is proper errnos.
millert@ ok and some help
|
|
ok otto@
|
|
eyeballed by miod@ and pedro@
|
|
by factoring most of the checks into a macro. OK otto@
|
|
us did not see it or get a chance to test it before it was committed. It
broke cvs and, in the ami driver, made it fail to see its devices.
|
|
proper; found by krause and mmap_fixed
|
|
this results in less kva waste due to static preallocation of those
for every phys page and also every swap page.
tested by beck krw miod
|
|
call uvm_unmap_p instead of uvm_unmap so that it has the process information
and can adjust vm_dused. okay pedro@ tedu@
|