path: root/sys/uvm
2018-08-20  Preparations for arm64 radeondrm(4) support.  (Mark Kettenis)
ok jsg@ (who pointed out the kern_pledge.c change was necessary as well)
2018-08-15  Push back the kernel lock in sys_mmap(2) a little bit more now that fd_getfile(9) is mpsafe.  (Mark Kettenis)
Note that sys_mmap(2) isn't actually unlocked currently. However, this diff has been tested with it unlocked, and I hope to unlock it for real soon-ish. ok visa@, mpi@
2018-07-22  In uvm_map_protect(), make sure we select a first map entry that ends after the start of the range of pages we're changing.  (Mark Kettenis)
Prevents a panic from a somewhat convoluted test case that anton@ came up with. ok guenther@, anton@
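To illustrate the class of bug being fixed, here is a minimal hypothetical sketch (simplified types and names, not the actual uvm_map.c code): an entry whose end merely touches the start of the range must be skipped, otherwise later code operates on an entry that does not overlap the range at all.

    #include <stddef.h>
    #include <stdint.h>

    typedef uintptr_t vaddr_t;          /* stand-in for the kernel typedef */

    struct map_entry {                  /* simplified stand-in for vm_map_entry */
            vaddr_t start, end;         /* covers [start, end) */
            struct map_entry *next;
    };

    /*
     * Return the first entry that actually overlaps the range being
     * changed, i.e. the first one whose end lies after `start'.  An
     * entry with end == start only touches the range and is skipped.
     */
    static struct map_entry *
    first_overlapping(struct map_entry *ent, vaddr_t start)
    {
            while (ent != NULL && ent->end <= start)
                    ent = ent->next;
            return ent;
    }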
2018-07-16  Insert the appropriate uvm_vnp_uncache(9) and uvm_vnp_setsize(9) kernel calls to ensure that the UVM cache for memory-mapped files is up to date.  (helg)
ok mpi@
2018-06-19  Rename some unused fields in struct uvmexp to unusedNN.  (Kenneth R Westerback)
Missing man page bits pointed out by jmc@. Ports source scan by sthen@. ok deraadt@ guenther@
2018-05-16  Avoid overflow in constraint computation. ok kettenis@ tb@  (Otto Moerbeek)
2018-05-12  Re-apply inadvertently misplaced r1.127 from kettenis@:  (Kenneth R Westerback)
"Buffer cache pages are wired but not counted as such. Therefore we have to set the wire count on the pages to 0 before we call uvm_pagefree() on them, just like we do in buf_free_pages(). Otherwise the wired pages counter goes negative. While there, also sprinkle some KASSERTs in there that buf_free_pages() has as well." ok beck@ (again)
2018-05-02  Remove proc from the parameters of vn_lock().  (Visa Hankala)
The parameter is unnecessary because curproc always does the locking. OK mpi@
2018-04-28  Clean up the parameters of VOP_LOCK() and VOP_UNLOCK().  (Visa Hankala)
It is always curproc that does the locking or unlocking, so the proc parameter is pointless and can be dropped. OK mpi@, deraadt@
2018-04-27  Move FREF() inside fd_getfile().  (Martin Pieuchot)
ok visa@
2018-04-18  Some programs create a PROT_NONE guard page at the far end of the provided stack buffer.  (Theo de Raadt)
With a page-aligned buffer, creating a MAP_STACK sub-region would undo the PROT_NONE guard. Ignore that last page. (We could check whether the last page is non-RW before choosing to skip it, but we've already elected to grow STK sizes to compensate. Always ignoring the last page makes it a non-MAP_STACK guard page which can be opportunistically discovered.) ok semarie stefan kettenis
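The userland pattern in question looks roughly like the sketch below: plain POSIX mmap(2)/mprotect(2), with the helper name invented and error handling trimmed for illustration. It assumes len is page-aligned and larger than one page.

    #include <sys/mman.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Allocate a stack buffer whose far end is a PROT_NONE guard
     * page; it is this last page that a later MAP_STACK remapping
     * of the buffer now leaves alone. */
    void *
    alloc_stack_with_guard(size_t len)
    {
            long pgsz = sysconf(_SC_PAGESIZE);
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANON, -1, 0);

            if (buf == MAP_FAILED)
                    return NULL;
            if (mprotect(buf + len - pgsz, pgsz, PROT_NONE) == -1)
                    return NULL;
            return buf;
    }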
2018-04-17  Make rnd hints avoid the brk area.  (Otto Moerbeek)
- Make rnd hints avoid the brk area. The rnd allocator refuses to allocate in the brk area anyway.
- Use a larger hint bound to spread the allocations more for the 32-bit case.
- Simplified the overly abstracted brk/stack allocator and switched off guard pages for the brk case. This allows i386 some extra space, depending on memory usage patterns.
- Reduce the brk area on i386 to give the rnd space more room.
ok stefan@ sthen@
2018-04-17  Remove protection checks from uvm_map_is_stack_remappable.  (Stefan Kempf)
Other parts of uvm/pmap check for proper prot flags already. This fixes the qemu startup problems that semarie@ reported on tech@.
2018-04-12  Implement the MAP_STACK option for mmap().  (Theo de Raadt)
Synchronous faults (pagefault and syscall) confirm the stack register points at MAP_STACK memory; otherwise SIGSEGV is delivered. sigaltstack() and pthread_attr_setstack() are modified to create a MAP_STACK sub-region which satisfies alignment requirements. Observe that MAP_STACK can only be set/cleared by mmap(), which zeroes the contents of the region: there is no mprotect() equivalent operation, so there is no MAP_STACK-adding gadget. This opportunistic software emulation of a stack-protection bit makes stack-pivot operations during a ROP chain fragile (kind of like removing a tool from the toolbox). original discussion with tedu, uvm work by stefan, testing by mortimer, ok kettenis
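For reference, creating such a region from userland looks like this minimal OpenBSD example (the size and the printf are arbitrary):

    #include <sys/mman.h>
    #include <stddef.h>
    #include <stdio.h>

    int
    main(void)
    {
            size_t len = 256 * 1024;
            /* mmap() zeroes the region and marks it as stack, so the
             * kernel's synchronous-fault check accepts a stack
             * pointer inside it. */
            void *stk = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANON | MAP_STACK, -1, 0);

            if (stk == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            printf("MAP_STACK region at %p\n", stk);
            return 0;
    }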
2018-04-10  Fix the stop condition for a linear search by taking the search direction into account.  (Otto Moerbeek)
Otherwise we might break out of the loop prematurely. ok stefan@
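A hypothetical illustration of the bug class (a generic array search, not the actual uvm code): the termination test has to match the direction of the walk, since a single forward-style condition would end a backward walk too early.

    #include <stddef.h>

    /* Direction-aware linear search; illustrative code only. */
    static int
    linear_search(const int *a, size_t n, int key, int forward)
    {
            size_t i;

            if (n == 0)
                    return -1;
            i = forward ? 0 : n - 1;
            for (;;) {
                    if (a[i] == key)
                            return (int)i;
                    if (forward) {
                            if (++i >= n)   /* past the last slot */
                                    break;
                    } else {
                            if (i-- == 0)   /* just checked slot 0 */
                                    break;
                    }
            }
            return -1;
    }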
2018-03-30  Unlock the NET_LOCK() before calling vn_lock(9).  (Martin Pieuchot)
This avoids lock ordering issues with the upcoming NFSnode locks. ok visa@
2018-03-27  Make sure that programs violating a pledge(2) promise or a memory protection cannot block the final SIGABRT.  (Martin Pieuchot)
While here, apply the same logic to ddb(4)'s kill command. From semarie@, ok deraadt@
2018-03-08  When we are rebooting, do not fail in uvn_io().  (Alexander Bluhm)
The vnodes are revoked while syncing disks, so the processes lose their executable pages. Instead of killing them with a SIGBUS after a page fault, just sleep. This should prevent init from dying without pages, followed by a kernel panic. initial diff from tedu@; OK deraadt@ tedu@
2018-02-19  Remove the almost unused `flags' argument of suser().  (Martin Pieuchot)
The accounting flag `ASU' will no longer be set, but that makes suser() mpsafe since it no longer messes with a per-process field. No objection from millert@, ok tedu@, bluhm@
2018-02-11  Can mask MAP_STACK by name rather than number.  (Theo de Raadt)
2018-01-18  While booting it does not make sense to wait for memory; there is no other process which could free it.  (Alexander Bluhm)
Better to panic in malloc(9) or pool_get(9) instead of sleeping forever. tested by visa@ patrick@ Jan Klemkow; suggested by kettenis@; OK deraadt@
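A hedged sketch of the policy (not the committed diff): the kernel's `cold' flag marks the single-threaded boot phase, during which a would-sleep allocation can never be satisfied by another process freeing memory, so it is better to panic loudly.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    /* Sketch only: in a would-sleep allocation path, panic while
     * the kernel is still cold instead of tsleep()ing forever. */
    void
    alloc_wait_or_panic(int flags)
    {
            if ((flags & M_NOWAIT) == 0 && cold)
                    panic("allocation would sleep during boot");
            /* ...otherwise it is safe to sleep for free pages... */
    }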
2018-01-15  Mask out (i.e. ignore) the bit which will be MAP_STACK in the future.  (Theo de Raadt)
This way diffs in snapshots can exercise the change in a less disruptive way. idea with sthen, ok kettenis tom others
2018-01-02  Stop assuming <sys/file.h> will pull in fcntl.h when _KERNEL is defined.  (Philip Guenther)
ok millert@ sthen@
2017-12-30  Don't pull in <sys/file.h> just to get fcntl.h.  (Philip Guenther)
ok deraadt@ krw@
2017-11-30  __MAP_NOFAULT doesn't make sense with anon mappings, so return EINVAL if that is attempted.  (Philip Guenther)
Minor cleanups:
- Eliminate some always-false and always-true tests against MAP_ANON.
- We treat anon mappings with neither MAP_{SHARED,PRIVATE} as MAP_PRIVATE, so explicitly indicate that.
ok kettenis@ beck@
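The check itself is simple; a simplified sketch of the shape of the validation (not the actual uvm_mmap.c code) might read:

    #include <sys/mman.h>
    #include <errno.h>

    /* __MAP_NOFAULT only makes sense for file-backed mappings, so
     * an anonymous mapping requesting it is rejected.  Simplified
     * illustration. */
    static int
    check_mmap_flags(int flags)
    {
            if ((flags & MAP_ANON) && (flags & __MAP_NOFAULT))
                    return EINVAL;
            return 0;
    }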
2017-08-12  Use the NET_LOCK() macro instead of handrolling it.  (Martin Pieuchot)
Tested by Hrvoje Popovski.
2017-08-12  In the locking wrappers for &map->lock and &map->mtx, pass through file+line when WITNESS is enabled.  (Philip Guenther)
ok visa@ kettenis@
2017-07-20  Accessing an mmap(2)ed file beyond its end should result in a SIGBUS, according to POSIX.  (Alexander Bluhm)
Bring the regression test and kernel in line for amd64 and i386. Other architectures have to follow. OK deraadt@ kettenis@
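The behaviour is easy to demonstrate from userland; in this small test program (file name invented), the second store hits a page entirely past EOF and should raise SIGBUS:

    #include <sys/mman.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            long pg = sysconf(_SC_PAGESIZE);
            int fd = open("tiny.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);

            if (fd == -1 || ftruncate(fd, 1) == -1)  /* 1-byte file */
                    err(1, "setup");
            /* Map two pages even though the file covers only one. */
            char *p = mmap(NULL, 2 * pg, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    err(1, "mmap");
            p[0] = 'x';      /* fine: within the file's first page */
            p[pg] = 'x';     /* page beyond EOF: expect SIGBUS */
            printf("not reached on a conforming system\n");
            return 0;
    }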
2017-05-21  Enable radeondrm(4) on loongson to get accelerated graphics with the RS780E chipset.  (Visa Hankala)
OK kettenis@, jsg@
2017-05-17  Raise the "uvm_map_entry_kmem_pool" IPL level to IPL_VM to prevent a deadlock.  (Martin Pieuchot)
A deadlock can occur when uvm_km_thread(), running without the KERNEL_LOCK(), is interrupted by a non-MPSAFE handler while holding the pool's mutex. At that moment, if another CPU is holding the KERNEL_LOCK() and wants to grab the pool mutex, like in sys_kbind(), kaboom! This is a temporary solution; a more general approach regarding mutexes and un-KERNEL_LOCK()ed threads is being discussed. Deadlock reported by sthen@, ok kettenis@
2017-05-15  Enable the NET_LOCK(), take 3.  (Martin Pieuchot)
Recursions are still marked as XXXSMP. ok deraadt@, bluhm@
2017-05-11  Unbreak PMAP_DIRECT archs.  (David Gwynne)
found by jmc@
2017-05-11  Reorder uvm init to avoid use before initialisation.  (David Gwynne)
The particular use-before-init was in uvm_init step 6, which calls kmeminit to set up malloc(9), which calls uvm_km_zalloc, which calls pmap_enter, which calls pool_get, which tries to allocate a page using km_alloc, which isn't initialised until step 9 in uvm_init. uvm_km_page_init calls kthread_create though, which uses malloc internally, so it can't be reordered before malloc init. To cope with this, uvm_km_page_init is split up: it sets up the subsystem and is called before kmeminit, while the thread init moves to uvm_km_page_lateinit, which is called after kmeminit in uvm_init.
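Schematically, with the function names from the commit and the bodies elided, the split looks like this:

    /* Runs before kmeminit(): set up the km_page subsystem without
     * allocating anything through malloc(9). */
    void
    uvm_km_page_init(void)
    {
            /* initialize free-page lists, locks, etc. */
    }

    /* Runs after kmeminit(): malloc(9) now works, so it is safe to
     * kthread_create() the uvm_km_thread here. */
    void
    uvm_km_page_lateinit(void)
    {
            /* start the page-replenishing kernel thread */
    }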
2017-05-09  Stop considering some sleeping threads as running.  (Martin Pieuchot)
PZERO used to be a special value in the first BSD releases, but since the introduction of tsleep(9) there's no way to tell if a thread is going to sleep for a "short" period of time. This removes the only (ab)use of ``p_priority'' outside the scheduler logic, which will help moving away from a priority-based scheduler. ok visa@
2017-05-08  Unifdef PMAP_UAREA, unused since we stopped supporting ARM < v7.  (Martin Pieuchot)
ok kettenis@
2017-05-03  Mark uvm_sync_lock as vnode'ish for witness purposes.  (Philip Guenther)
It is taken between mount locks and inode locks, which may be recorded in either order. ok visa@
2017-04-30  Unifdef KGDB.  (Martin Pieuchot)
It doesn't compile and hasn't been working during the last decade. ok kettenis@, deraadt@
2017-04-20  Tweak lock inits to make the system runnable with witness(4) on amd64 and i386.  (Visa Hankala)
2017-04-09  Convert a malloc(9) to mallocarray(9).  (David Hill)
ok deraadt@
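The shape of such a conversion, with an invented struct and call site for illustration: mallocarray(9) performs the count-times-size multiplication with overflow checking, so a large count cannot wrap the allocation size.

    #include <sys/types.h>
    #include <sys/malloc.h>

    struct foo { int x; };

    struct foo *
    alloc_foos(size_t n)
    {
            /* before: malloc(n * sizeof(struct foo), M_TEMP, M_WAITOK) */
            return mallocarray(n, sizeof(struct foo), M_TEMP, M_WAITOK);
    }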
2017-03-17  Revert the NET_LOCK() and bring back pf's contention lock for release.  (Martin Pieuchot)
For the moment the NET_LOCK() is always taken by threads running under the KERNEL_LOCK(). That means it doesn't buy us anything except a possible deadlock that we did not spot. So make sure this doesn't happen; we'll have plenty of time in the next release cycle to stress test it. ok visa@
2017-03-09  Don't take the vmmap lock when dumping core.  (Philip Guenther)
It's not actually necessary, and it creates a lock-order reversal with inode locks. ok stefan@
2017-03-05  Handle unshared amaps in uvm_coredump_walkmap() such that untouched pages don't get written out to the core file.  (Philip Guenther)
Instead they are represented via segments which have a memory size greater than their file size. This shrinks core files and eliminates a case where core dumping fails with EFAULT; this can still happen in the shared amap case. Based on a problem report from (and testing by) semarie@. ok stefan@
2017-03-05  Generating a coredump requires walking the map twice.  (Philip Guenther)
Change uvm_coredump_walkmap() to do both walks with a callback in between, so it can hold locks and change state across the two. ok stefan@
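A hypothetical sketch of that control flow (invented names and types, not the real uvm_coredump_walkmap() prototype): one function owns both passes and runs a caller-supplied callback between them.

    /* One function owns both walks and runs a callback between
     * them, so locks and state can span the two passes. */
    static int
    coredump_walk_twice(void *cookie,
        int (*walk)(void *cookie, int pass),
        int (*setup)(void *cookie))
    {
            int error;

            if ((error = walk(cookie, 1)) != 0)     /* pass 1: measure */
                    return error;
            if ((error = setup(cookie)) != 0)       /* locks, state */
                    return error;
            return walk(cookie, 2);                 /* pass 2: write */
    }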
2017-02-14  Convert most of the manual checks for CPU hogging to sched_pause().  (Martin Pieuchot)
The distinction between preempt() and yield() stays, as it is useful to know whether a thread decided to yield by itself or the kernel told it to go away. ok tedu@, guenther@
2017-02-12  Split up fork1().  (Philip Guenther)
- FORK_THREAD handling is a totally separate function, thread_fork(), that is only used by sys___tfork() and which loses the flags, func, arg, and newprocp parameters and gains a tcb parameter to guarantee the new thread's TCB is set before the creating thread returns.
- fork1() loses its stack and tidptr parameters.
Common bits factor out:
- struct proc allocation and initialization moves to thread_new().
- maxthread handling moves to fork_check_maxthread().
- setting the new thread running moves to fork_thread_start().
The MD cpu_fork() function swaps its unused stacksize parameter for a tcb parameter. luna88k testing by aoyama@, alpha testing by dlg@. ok mpi@
2017-02-05  Update a comment that suggested the stack was executable. Nope!  (Philip Guenther)
2017-02-05  Delete a comment obsoleted by the rewrite in rev 1.136 (2011-05-24).  (Philip Guenther)
2017-02-02  When dumping core, skip pages marked as unreadable instead of aborting the dump.  (Philip Guenther)
tracked down with help from semarie@. ok mpi@
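Sketched with hypothetical helper names (write_segment()/write_zero_hole() are invented, not kernel API), the idea is:

    #include <stdint.h>
    #include <sys/mman.h>

    struct seg { uintptr_t start, end; int prot; };

    /* Hypothetical writers standing in for the real dump output. */
    extern int write_segment(const struct seg *);
    extern int write_zero_hole(const struct seg *);

    /* An unreadable segment no longer aborts the dump; it is
     * recorded without contents so the core file stays usable. */
    static int
    dump_segment(const struct seg *s)
    {
            if ((s->prot & PROT_READ) == 0)
                    return write_zero_hole(s);
            return write_segment(s);
    }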
2017-01-31  Sprinkle some free sizes in uvm/.  (David Hill)
ok stefan@ visa@
2017-01-25  Enable the NET_LOCK(), take 2.  (Martin Pieuchot)
Recursions are currently known and marked as XXXSMP. Please report any assert to bugs@.