|
when we block due to hitting the TTYHOG limit. OK miod@
|
|
|
|
That is consistent to the SBLASTRECORDCHK and SBLASTMBUFCHK macros.
OK markus@
|
|
containing m_nextpkt chains.
OK markus@
|
|
malloc(), so that it can't exit and be freed if we sleep.
(another sparc.p nightmare test case)
ok beck@, phessler@
|
|
there are no buffers on the dirty queue to clean.
ok beck@
|
|
ok jsing@ krw@ mikeb@
|
|
as fix the case where buffers can be returned on the vinvalbuf path
and we do not get woken up when waiting for kva.
An earlier version was looked at and ok'd by guenther@ in Coimbra, with
helpful comments from kettenis@
|
|
be throwing away when growing the buffer cache - ok mlarkin@
|
|
dynamically, by comparing the stack pointer against the altstack
base and size, so that you get the correct answer if you longjmp
out of the signal handler, as tested by regress/sys/kern/stackjmp/.
Also, fix alt stack handling on vax, where it was completely broken.
Testing and corrections by miod@, krw@, tobiasu@, pirofti@
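A minimal userland sketch of the dynamic check described above (hypothetical helper name, not the kernel's code): instead of keeping a sticky "on sigstack" flag, decide by comparing the stack pointer against the alternate stack's base and size. If a handler longjmp()s away, sp leaves the altstack range and the answer stays correct.

```c
#include <stddef.h>
#include <stdint.h>

/* Return nonzero iff sp lies within [base, base + size) --
 * i.e. we are currently running on the alternate signal stack. */
int
on_altstack(uintptr_t sp, uintptr_t base, size_t size)
{
	return sp >= base && sp < base + size;
}
```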
|
|
process), then don't decrement the total and per-user counts of processes.
ok deraadt@ miod@
|
|
everywhere instead of setting splbio.
ok krw@ pirofti@
|
|
This fixes a problem where we could sleep for kva and then our pointers
would not be valid on the next pass through the loop. We do this
by adding buf_acquire_nomap() - which can be used to busy up the buffer
without changing its mapped or unmapped state. We do not need to have
the buffer mapped to invalidate it, so it is sufficient to acquire it
for that. In the case where we write the buffer, we do map the buffer, and
potentially sleep.
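A toy model of the two acquire paths (hypothetical struct and names, not the kernel's struct buf): buf_acquire() busies the buffer and ensures it is mapped, possibly sleeping for kva; buf_acquire_nomap() busies it without touching its mapped state, which is all that invalidation needs.

```c
/* Simplified stand-in for struct buf: just the two bits of state
 * this commit message is about. */
struct tbuf {
	int busy;	/* buffer is acquired (B_BUSY) */
	int mapped;	/* buffer has kva mapped */
};

/* Acquire and map: in the real kernel this path may sleep for kva. */
void
tbuf_acquire(struct tbuf *bp)
{
	bp->busy = 1;
	bp->mapped = 1;
}

/* Acquire without changing mapped/unmapped state: cannot sleep
 * for kva, so pointers held across the call stay valid. */
void
tbuf_acquire_nomap(struct tbuf *bp)
{
	bp->busy = 1;
}
```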
|
|
A long time ago (in vienna) the reserves for the cleaner and syncer were
removed. softdep and many things have not performed the same ever since.
Follow-on generations of buffer cache hackers assumed the existing code
was the reference and have been in a frustrating state of coprophagia ever
since.
This commit
0) Brings back a (small) reserve allotment of buffer pages, and the kva to
map them, to allow the cleaner and syncer to run even when under intense
memory or kva pressure.
1) Fixes a lot of comments and variables to represent reality.
2) Simplifies and corrects how the buffer cache backs off down to the lowest
level.
3) Corrects how the page daemon asks the buffer cache to back off, ensuring
that uvmpd_scan is done to recover inactive pages in low memory situations
4) Adds a high water mark to the pool used to allocate struct buf's
5) Corrects the cleaner and the sleep/wakeup cases in both low memory and low
kva situations. (including accounting for the cleaner/syncer reserve)
Tested by many, with very much helpful input from deraadt, miod, tobiasu,
kettenis and others.
ok kettenis@ deraadt@ jj@
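Item 0 above can be sketched as follows (hypothetical names and a made-up reserve size, not the committed code): a small allotment of buffer pages is withheld from ordinary consumers, and only the cleaner and syncer may dip into it, so they can keep freeing pages even under intense pressure.

```c
#define RESERVE_PAGES	16	/* illustrative value only */

/* Decide whether an allocation of `want' buffer pages may proceed.
 * `privileged' is set for the cleaner and syncer, which are allowed
 * to eat into the reserve; everyone else must leave it untouched. */
int
bufpage_alloc_ok(int free_pages, int want, int privileged)
{
	if (privileged)
		return free_pages >= want;
	return free_pages - want >= RESERVE_PAGES;
}
```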
|
|
|
|
program could induce the kernel to panic by attempting to do a semop
with nsops > kern.seminfo.semume and the SEM_UNDO flag set.
This fixes it so we return ENOSPC, like the man page says, rather
than panicking.
ok miod@, millert@
|
|
when there are more of them than the size of the queue waiting, and nothing else
going on.
ok miod@ kettenis@
|
|
wdog_shutdown() for external usage.
|
|
queue and calling soaccept(), so that the socket can't get torn down
by a TCP RST in the middle and trigger "panic: soaccept: !NOFDREF", as
seen by halex@
Analysis, original diff, and ok bluhm@
|
|
thread coredumps, the former thread needs to be released by the
later single_thread_set(SINGLE_EXIT) call, even though its P_WEXIT
flag is set.
ok kettenis@
|
|
- Whitespace KNF
- Removal/fixing of old useless comments
- Removal of unused counter
- Removal of pointless test that had no effect
ok krw@
|
|
get{sock,peer}name() behave like accept() when the involved UNIX-domain
socket isn't bound to an address, returning an AF_UNIX sockaddr
with zero-length sun_path. Based on diff from robert@ and mikeb@
ok robert@ deraadt@
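A quick userland illustration of the behavior (note the exact reported length is OS-dependent; the assertions below only check what is portable): getsockname(2) on an unbound UNIX-domain socket should succeed and hand back an AF_UNIX sockaddr rather than failing or returning garbage.

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Query the local name of an unbound AF_UNIX socket (a socketpair
 * end is never bound).  Stores the address family in *family_out
 * and returns the reported sockaddr length, or -1 on error. */
int
unbound_name_len(int *family_out)
{
	int fds[2];
	struct sockaddr_un sun;
	socklen_t len = sizeof(sun);

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1)
		return -1;
	memset(&sun, 0, sizeof(sun));
	if (getsockname(fds[0], (struct sockaddr *)&sun, &len) == -1) {
		close(fds[0]);
		close(fds[1]);
		return -1;
	}
	*family_out = sun.sun_family;
	close(fds[0]);
	close(fds[1]);
	/* With the fix, sun_path is zero-length for an unbound socket. */
	return (int)len;
}
```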
|
|
|
|
n = 128.
NSCAN is essentially the disksort()-style elevator algorithm for ordering
disk I/O operations. The difference is that we re-order in chunks of
128 operations before continuing with the rest of the work. This avoids
the starvation problem of basic SCAN (aka the elevator algorithm), where
continued inserts can leave requests sitting for a long time. This
solves problems where usb sticks could be unusable while long sequential
writes happened, and systems would become unresponsive while dumping core.
hacked upon (and this version largely rewritten by) tedu and myself.
Note, can be "backed out" by changing BUFQ_DEFAULT back to disksort in
buf.h
ok kettenis@, tedu@, krw@
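The chunked-elevator idea can be sketched like this (hypothetical names, modeling requests as bare block numbers; not the committed bufq code): sort only a batch of up to 128 pending requests into elevator order and serve that batch to completion, so later inserts cannot indefinitely jump ahead of older requests.

```c
#include <stdlib.h>

#define NSCAN_CHUNK	128	/* the n = 128 batch size */

static int
cmp_blkno(const void *a, const void *b)
{
	long x = *(const long *)a, y = *(const long *)b;
	return (x > y) - (x < y);
}

/* Take up to NSCAN_CHUNK requests from the head of the queue and
 * put just that batch into ascending block order; later arrivals
 * are not considered until the batch has been consumed.
 * Returns the batch size. */
size_t
nscan_batch(long *queue, size_t pending)
{
	size_t n = pending < NSCAN_CHUNK ? pending : NSCAN_CHUNK;

	qsort(queue, n, sizeof(queue[0]), cmp_blkno);
	return n;
}
```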
|
|
This change ensures that writes in flight from the buffer cache via bufq
are limited to a high water mark - when the limit is reached the writes sleep
until the amount of IO in flight reaches a low water mark. This avoids the
problem where userland can queue an unlimited amount of asynchronous writes
resulting in the consumption of all/most of our available buffer mapping kva,
and a long queue of writes to the disk.
ok kettenis@, krw@
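A minimal single-threaded model of the throttle (hypothetical names; the real code sleeps and uses wakeup(9) where noted): writers block once the high water mark of in-flight writes is reached, and are released only after completions drain the count down to the low water mark, giving hysteresis instead of oscillation at a single limit.

```c
struct wthrottle {
	int inflight;	/* writes currently queued to the disk */
	int hi, lo;	/* high and low water marks */
	int throttled;	/* set while new writers must sleep */
};

/* Called before queueing a write.  Returns 1 if the caller may
 * proceed, 0 if it should sleep and retry later. */
int
wthrottle_start(struct wthrottle *w)
{
	if (w->inflight >= w->hi)
		w->throttled = 1;
	if (w->throttled)
		return 0;
	w->inflight++;
	return 1;
}

/* Called from the I/O completion path. */
void
wthrottle_done(struct wthrottle *w)
{
	w->inflight--;
	if (w->throttled && w->inflight <= w->lo)
		w->throttled = 0;	/* wakeup(9) would go here */
}
```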
|
|
paths are reflexive. It is now possible to fail part-way through a
suspend sequence, and recover along the resume code path.
Split DVACT_SUSPEND by adding a new DVACT_POWERDOWN method, which is used
after hibernate (and suspend too) to finish the job. Some drivers
must be converted at the same time to use this instead of shutdown hooks
(the others will follow at a later time)
ok kettenis mlarkin
|
|
or blocking for each send(2) call.
diff from UMEZAWA Takeshi
ok bluhm
|
|
consistent when the effective gid isn't also a supplementary group.
ok beck@
|
|
boundary; uvm depends on this and will KASSERT this for its own safety.
Found the hard way, rounding direction discussed with ariane@ (I initially
wanted to round down, but it makes more sense to round up).
Of course no one in his right mind ought to run OMAGIC binaries (-:
|
|
are cleared as well; from hshoexer@, feedback and ok bluhm@, ok claudio@
|
|
in the release path. Especially accessing m in a KDASSERT() could
go wrong.
OK claudio@
|
|
There was a small race in sorwakeup() where that could happen if
we slept before the SB_SPLICE flag was set.
ok claudio@
|
|
conditions as in soreceive(). My goal is to make socket splicing
less protocol dependent.
ok claudio@
|
|
ok beck@
|
|
by the Go linker) as native executables even if they don't contain an
OpenBSD PT_NOTE segment.
Confirmed to fix Go by sthen
ok kettenis, deraadt
|
|
|
|
for all the compat layers which are now gone. Linux compat still works
because it always used another method in any case, and nothing looks at
p_os anymore.
ok jsing
|
|
- Avoid using copyinstr() without checking the return value.
- sys_mount() has already copied the path in, so pass this to the
filesystem mount code so that it does not have to copy it in again.
- Avoid copyinstr()/bzero() dance when we can simply bzero() and strlcpy().
ok krw@
|
|
|
|
|
|
|
|
exiting. At that point ps_single may point to a proc that's already freed.
Since there is no point in killing a process that's already exiting, just
skip this step.
ok guenther@
|
|
pointer array; we can access it directly.
ok guenther
|
|
because elem_count has an unsigned type (size_t).
Noted by Brad/Clang; no binary change on amd64 using GCC either.
|
|
executable and DSO (via crtbegin.c/crtbeginS.c). Not used yet, but
needed before GCC can start emitting -fstack-protector code that uses
them instead of __guard.
|
|
|
|
|
|
shared between processes.
ok djm@
|
|
This also makes sure we call cpu_unidle() on the correct cpu, since the
inlining order was wrong and could call it on the old cpu.
ok kettenis@
|
|
segments to the kernel, ld (2.15), and ld.so. Tested on alpha, amd64,
i386, macppc, and sparc64 (thanks naddy, mpi, and okan!).
Idea discussed for some time; committing now for further testing.
ok deraadt
|