Age | Commit message | Author |
|
This prevents descriptors from being closed concurrently on the receiver side.
ok bluhm@ claudio@
|
|
Make the cf_attach member of struct cfdata const and sprinkle a few
const into subr_autoconf.c to make this work. Fixes the compilation
of sys/dev/rd.c with newly const rd_ca.
ok miod (who had a similar diff)
|
|
The old comment only mentioned that tty_nmea was used for time, but
subsequently position data was added to this line discipline.
|
|
|
|
This fixes a problem where NOTE_EXIT could be received before
the process was officially a zombie and thus not immediately
waitable. OK deraadt@ visa@
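For illustration, a minimal userland sketch (not part of the commit) of the
guarantee this fix provides: once kevent(2) reports NOTE_EXIT, an immediate
waitpid(2) with WNOHANG must succeed. The pipe only sequences the child's
exit after the filter is registered.

#include <sys/types.h>
#include <sys/event.h>
#include <sys/wait.h>

#include <err.h>
#include <unistd.h>

int
main(void)
{
	struct kevent kev;
	int kq, pfd[2], status;
	pid_t pid;
	char c;

	if ((kq = kqueue()) == -1)
		err(1, "kqueue");
	if (pipe(pfd) == -1)
		err(1, "pipe");

	if ((pid = fork()) == -1)
		err(1, "fork");
	if (pid == 0) {
		close(pfd[1]);
		read(pfd[0], &c, 1);	/* block until parent is ready */
		_exit(0);
	}
	close(pfd[0]);

	EV_SET(&kev, pid, EVFILT_PROC, EV_ADD, NOTE_EXIT, 0, NULL);
	if (kevent(kq, &kev, 1, NULL, 0, NULL) == -1)
		err(1, "kevent: register");

	close(pfd[1]);			/* child sees EOF and exits */
	if (kevent(kq, NULL, 0, &kev, 1, NULL) == -1)
		err(1, "kevent: wait");

	/* With the fix, the child is already a zombie at this point. */
	if (waitpid(pid, &status, WNOHANG) != pid)
		errx(1, "child not immediately waitable");
	return 0;
}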
|
|
vnode_hold_list and vnode_free_list aren't used outside kern/vfs_subr.c
move `struct freelst` into kern/vfs_subr.c, where it is used.
no intended behaviour changes. survived a release(8) build.
ok millert@
|
|
libcrypto can access this sysctl on arm64 without restrictions to determine
cpu features
ok deraadt@, kettenis@
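As a hedged illustration of what such a consumer might do, assuming the
machdep MIB name CPU_ID_AA64ISAR0 from <machine/cpu.h>; the field layout
follows the ARMv8 architecture manual:

#include <sys/types.h>
#include <sys/sysctl.h>

#include <machine/cpu.h>

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	int mib[2] = { CTL_MACHDEP, CPU_ID_AA64ISAR0 };
	uint64_t isar0;
	size_t len = sizeof(isar0);

	if (sysctl(mib, 2, &isar0, &len, NULL, 0) == -1)
		return 1;

	/* ID_AA64ISAR0_EL1 bits 7:4 describe AES instruction support. */
	if (((isar0 >> 4) & 0xf) >= 1)
		printf("AES instructions available\n");
	return 0;
}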
|
|
for PCB tables. It does not break the userland build anymore.
pf_socket_lookup() calls in_pcbhashlookup() in the PCB layer. To
run pf in parallel, make parts of the stack MP safe. Protect the
list and hashes in the PCB tables with a mutex.
Note that the protocol notify functions may call pf via tcp_output().
As the pf lock is a sleeping rw_lock, we must not hold a mutex. To
solve this for now, collect these PCBs in inp_notify list and protect
it with exclusive netlock.
OK sashan@
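A hedged sketch of the pattern described above; table_notify() and
inp_matches() are illustrative names, not the exact kernel code:

void
table_notify(struct inpcbtable *table, int error,
    void (*notify)(struct inpcb *, int))
{
	SIMPLEQ_HEAD(, inpcb) inp_list = SIMPLEQ_HEAD_INITIALIZER(inp_list);
	struct inpcb *inp;

	/* Caller holds the exclusive netlock. */
	mtx_enter(&table->inpt_mtx);
	TAILQ_FOREACH(inp, &table->inpt_queue, inp_queue) {
		if (!inp_matches(inp))		/* illustrative predicate */
			continue;
		in_pcbref(inp);			/* keep inp alive off-mutex */
		SIMPLEQ_INSERT_TAIL(&inp_list, inp, inp_notify);
	}
	mtx_leave(&table->inpt_mtx);

	/*
	 * The notify callback may reach pf via tcp_output() and take
	 * the sleeping pf lock, so it must run after the mutex is
	 * dropped; the exclusive netlock serializes this part.
	 */
	while ((inp = SIMPLEQ_FIRST(&inp_list)) != NULL) {
		SIMPLEQ_REMOVE_HEAD(&inp_list, inp_notify);
		(*notify)(inp, error);
		in_pcbunref(inp);
	}
}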
|
|
code similar in the non-DIAGNOSTIC case. Rename the refcnt variable to
refs for consistency with r_refs. Add KASSERT() in refcnt_finalize().
OK visa@
|
|
OK bluhm@ dlg@
|
|
OK bluhm@
|
|
|
|
OK dlg@ bluhm@
|
|
refcnt_shared() checks whether the object has multiple references.
When refcnt_shared() returns zero, the caller is the only reference
holder.
refcnt_read() returns a snapshot of the counter value.
refcnt_shared() suggested by dlg@.
OK dlg@ mvs@
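A minimal usage sketch of this refcnt(9) API with an illustrative object
type; the malloc(9) type and the surrounding code are made up for the
example:

struct obj {
	struct refcnt	o_refs;
	/* ... payload ... */
};

struct obj *
obj_create(void)
{
	struct obj *o;

	o = malloc(sizeof(*o), M_DEVBUF, M_WAITOK | M_ZERO);
	refcnt_init(&o->o_refs);		/* count starts at 1 */
	return o;
}

void
obj_rele(struct obj *o)
{
	if (refcnt_rele(&o->o_refs))		/* true on last release */
		free(o, M_DEVBUF, sizeof(*o));
}

void
obj_modify(struct obj *o)
{
	if (!refcnt_shared(&o->o_refs)) {
		/* Sole reference holder: safe to modify in place. */
	} else {
		/* refcnt_read() is only a snapshot, e.g. for diagnostics. */
		printf("obj has %u references\n", refcnt_read(&o->o_refs));
	}
}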
|
|
This reverts the commit protecting the list and hashes in the PCB tables
with a mutex since the build of sysctl(8) breaks, as found by kettenis.
ok sthen
|
|
run pf in parallel, make parts of the stack MP safe. Protect the
list and hashes in the PCB tables with a mutex.
Note that the protocol notify functions may call pf via tcp_output().
As the pf lock is a sleeping rw_lock, we must not hold a mutex. To
solve this for now, collect these PCBs in inp_notify list and protect
it with exclusive netlock.
OK sashan@
|
|
to the debugger can cause a loop between the debugger and cursig()
if the signal is masked. cursig() has no way to know which signal
was already delivered to the debugger and so it delivers the same
signal over and over again.
Instead handle traps for masked signals directly in trapsignal. This
is what rev 1.293 was mostly about. If SIGTRAP was masked by the
process, breakpoints no longer worked since the signal delivery to
the debugger did not happen. Handling this case in trapsignal solves
both the problem with the loop and the delivery of masked traps.
Problem reported and fix tested by matthieu@
OK kettenis@ mpi@
|
|
variables. Although not necessary everywhere, using atomic functions
exclusively for variables marked as atomic is clearer.
OK mvs@ visa@
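A small sketch of the convention, assuming a field documented as [a]
(atomic); the struct and function names are illustrative:

struct obj {
	unsigned int	o_ref;		/* [a] reference count */
};

void
obj_take(struct obj *o)
{
	atomic_inc_int(&o->o_ref);
}

unsigned int
obj_refs(struct obj *o)
{
	/*
	 * Use an explicit atomic load even where a plain read would
	 * be benign, so every access to an [a] member is visibly
	 * atomic.
	 */
	return atomic_load_int(&o->o_ref);
}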
|
|
Revert the pr_usrreqs move: syzkaller found a NULL pointer deref
and I won't be available to monitor for followup issues for a bit
|
|
ok deraadt
|
|
then be shared among protosw structures, following the same basic
direction as NetBSD and FreeBSD for this.
Split PRU_CONTROL out of pr_usrreq into pru_control, giving it the
proper prototype to eliminate the previously necessary casts.
ok mvs@ bluhm@
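A hedged sketch of the shape of this split; member names and argument
order are illustrative rather than the exact headers:

struct pr_usrreqs {
	/* before: every request squeezed through one entry point */
	int	(*pru_usrreq)(struct socket *, int, struct mbuf *,
		    struct mbuf *, struct mbuf *, struct proc *);
	/* after: PRU_CONTROL gets its own, properly typed hook */
	int	(*pru_control)(struct socket *, u_long, caddr_t,
		    struct ifnet *);
	/* ... */
};

static inline int
pru_control(struct socket *so, u_long cmd, caddr_t data, struct ifnet *ifp)
{
	if (so->so_proto->pr_usrreqs->pru_control == NULL)
		return (EOPNOTSUPP);
	return ((*so->so_proto->pr_usrreqs->pru_control)(so, cmd, data, ifp));
}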
|
|
|
|
'sockaddr' structure with the socket's address. For key management and
route domain sockets it just returns an error.
ok bluhm@
|
|
proper strings, adapt struct acct's ac_comm similarly. While here increase
ac_mem to 32-bits, increase ac_flag from 8 to 32 bits for future extensions,
add ac_pid for forensics, and reorder the structure to avoid compiler pads.
More work remains in the sa(8) command to use ac_pid better.
This is a flag day for the acct file format; new/old files/tools are incompatible.
ok bluhm millert
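A hedged sketch of what such a record can look like; field widths and
ordering here are illustrative, not the authoritative <sys/acct.h>.
Wide, naturally aligned members are grouped so the compiler inserts no
padding, and ac_comm is a NUL-terminated string:

struct acct {
	time_t		ac_btime;	/* starting time */
	comp_t		ac_utime;	/* user time */
	comp_t		ac_stime;	/* system time */
	comp_t		ac_etime;	/* elapsed time */
	comp_t		ac_io;		/* count of IO blocks */
	uid_t		ac_uid;		/* user id */
	gid_t		ac_gid;		/* group id */
	dev_t		ac_tty;		/* controlling tty */
	uint32_t	ac_mem;		/* average memory usage, now 32 bits */
	pid_t		ac_pid;		/* new: process id, for forensics */
	uint32_t	ac_flag;	/* accounting flags, 8 -> 32 bits */
	char		ac_comm[24];	/* command name, NUL-terminated */
};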
|
|
including the NUL), in all internal interfaces, and expose this
in ktrace, core, or proc.h visibility.
ok millert
|
|
net/if_pppx.c pointed out by jsg@
ok gnezdo@ deraadt@ jsg@ mpi@ millert@
|
|
|
|
|
|
|
|
cold=2. Use the same strategy in a similar phase during hibernate.
|
|
phases where sleeps are not allowed, and this check used to discover it.
msleep() needs the same check.
|
|
Ok deraadt@ guenther@
|
|
failure modes. Also, pack the code a little bit, making it easier to read.
|
|
OK mpi@
|
|
ok guenther@ rob@
|
|
and restart the suspend all over again. This was previously done by issuing
a task to the acpi thread, but this is simpler.
(I want to try to duplicate these tests earlier in the resume path...)
|
|
able to react to this suitably.
|
|
Ok deraadt@
|
|
with AML parsing outside the acpi thread, the locking-release dance
around wsdisplay_{suspend,resume} can be removed
ok kettenis
|
|
reset the MD state before bailing out. New MD function sleep_abort()
does that.
|
|
ok deraadt@ guenther@
|
|
in sleep_resume(), which seems sensible for other future systems also
|
|
from cursig() to postsig() or the caller itself. This will simplify locking.
Also alter sigactsfree() a bit and move it into process_zap() so ps_sigacts
is always a valid pointer.
OK semarie@
|
|
previously sbchecklowmem() (and sonewconn()) would look at the mbuf
and mbuf cluster pools to see if they were approaching their hard
limits. based on how many mbufs/clusters were allocated against the
limits, socket operations would start to fail with ENOBUFS until
utilisation went down.
mbufs and clusters have changed a lot since then though. there are
now many mbuf cluster pools, not just one for 2k clusters. because
of this the mbuf layer now limits the amount of backend page memory
all the mbuf pools can allocate, rather than limiting the individual
pools. this means sbchecklowmem() ends up looking at the default
pool hard limit, which is UINT_MAX, which in turn means
sbchecklowmem() probably never applies backpressure. this is made
worse on multiprocessor systems where per cpu caches of mbuf and
cluster pool items are enabled, because the number of in-use pool
items is distorted by the cpu caches.
this switches sbchecklowmem to looking at the page allocations made
by all the pools instead. the big benefit of this is that the page
allocations are much more representative of the overall mbuf memory
usage in the system. the downside is that the backend page
allocation accounting does not see idle memory held by pools. pools
cannot release partially free pages to the page backend (obviously),
and pools cache idle items to avoid thrashing on the backend page
allocator. this means the page allocation level is higher than the
memory used by actual in-flight mbufs.
however, this can also be a benefit. the backend page allocation is a
kind of smoothed out "trend" line. mbuf utilisation over short periods
can be extremely bursty because of things like rx ring dequeue and fill
cycles, or large socket sends. if you're trying to grow socket
buffers while these things are happening, luck becomes an important
factor in whether it will work or not. because pools cache idle items,
the backend page utilisation better represents the overall trend
of activity in the system and will give more consistent behaviour here.
this diff is deliberately simple. we're basically going from "no
limits" to "some sort of limit" for sockets again, so keeping the
code simple means it should be easy to understand and tweak in the
future.
ok djm@ visa@ claudio@
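a sketch of the idea, assuming a helper m_pool_used() that reports the
backend page utilisation as a percentage; the thresholds are illustrative:

static int
sbchecklowmem(void)
{
	static int sblowmem;
	unsigned int used = m_pool_used();	/* assumed helper, percent */

	/* hysteresis so bursty traffic doesn't flap the signal */
	if (used < 60)
		sblowmem = 0;
	else if (used > 80)
		sblowmem = 1;

	return (sblowmem);
}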
|
|
ok kettenis
|
|
|
|
This avoids verb overlap with f_modify.
|
|
Use the f_event callback for checking event state within the pipe
event filters. This enables the same f_modify and f_process functions
to handle the different filter types.
OK anton@
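A hedged sketch of the resulting shape, loosely following kern/sys_pipe.c;
knote_modify() is assumed to be the kqueue helper that applies the kevent
and re-runs the knote's own f_event:

static int
filt_pipemodify(struct kevent *kev, struct knote *kn)
{
	struct pipe *rpipe = kn->kn_fp->f_data;
	int active;

	rw_enter_write(rpipe->pipe_lock);
	/* knote_modify() applies kev, then calls kn->kn_fop->f_event(). */
	active = knote_modify(kev, kn);
	rw_exit_write(rpipe->pipe_lock);

	return (active);
}

Because the state check is deferred to f_event, one f_modify (and one
f_process) body serves both the read and write filters.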
|
|
OK mpi@
|
|
need to do this can do it a few moments later in a different hook
|