|
|
|
not "size of the storage of the pin elements"
|
|
In my recent commit I missed that sblock() may sleep while soreceive()
holds the inpcb mutex. Call pru_lock() after sblock().
Reported-by: syzbot+f79c896ec019553655a0@syzkaller.appspotmail.com
Reported-by: syzbot+08b6f1102e429b2d4f84@syzkaller.appspotmail.com
OK mvs@
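A minimal sketch of the corrected ordering, with simplified signatures
and an elided soreceive() body; this is illustrative, not the actual diff:

	/*
	 * sblock() may sleep, so take it before the inpcb mutex;
	 * pru_lock() opens the no-sleep critical section.
	 */
	error = sblock(so, &so->so_rcv, SBLOCKWAIT(flags));
	if (error)
		return (error);
	pru_lock(so);		/* inpcb mutex: no sleeping while held */
	/* ... pull data off the receive buffer ... */
	pru_unlock(so);
	sbunlock(so, &so->so_rcv);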
|
|
After if_detach() has called if_remove(), if_get() will return NULL.
Before if_detach() grabs the net lock, the ARP timer can still run.
In this case arptfree() should just return instead of triggering an
assertion because ifp is NULL. The ARP route will be deleted later
when in_ifdetach() calls in_purgeaddr().
OK kn@ mvs@ claudio@
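A minimal sketch of the early return, based on the commit text rather
than the actual source:

	void
	arptfree(struct rtentry *rt)
	{
		struct ifnet *ifp;

		ifp = if_get(rt->rt_ifidx);
		if (ifp == NULL)
			return;	/* detach in progress; in_purgeaddr() cleans up */
		/* ... expire the ARP entry ... */
		if_put(ifp);
	}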
|
|
For protocols that care about locking, use the shared net lock to
call sobind(). Use the per-socket rwlock together with the shared
net lock. This affects the protocols UDP, raw IP, and divert. Move
the inpcb mutex locking into soreceive(), as it is only used there.
Add a comment describing the current implementation of inpcb locking.
OK mvs@ sashan@
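A minimal sketch of the bind path under the new locking; in the tree
this is wrapped by solock(), so the explicit rw_enter_write() here is
a simplification:

	NET_LOCK_SHARED();
	rw_enter_write(&so->so_lock);	/* per-socket rwlock */
	error = sobind(so, nam, p);
	rw_exit_write(&so->so_lock);
	NET_UNLOCK_SHARED();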
|
|
Introduce the `fd_lock' rwlock(9) and use it to protect the `fd_fbufs_in'
fuse buffer queue and the `fd_rklist' knotes list.
Tested by Rafael Sadowski.
Discussed with and ok from bluhm
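A minimal sketch of the intended usage, assuming the queue is a SIMPLEQ;
field names follow the commit message, the rest is illustrative:

	rw_init(&fd->fd_lock, "fusefd");

	/* writer side: enqueue a fuse buffer */
	rw_enter_write(&fd->fd_lock);
	SIMPLEQ_INSERT_TAIL(&fd->fd_fbufs_in, fbuf, fb_next);
	rw_exit_write(&fd->fd_lock);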
|
|
Release the netlock and take the `sc_lock' rwlock(9) at the beginning of
pflowioctl() and do the corresponding operations at the end. Use `sc_lock'
to protect `sc_dying'.
We need to release the netlock not only to keep the lock order with the
`sc_lock' rwlock(9), but also because pflowioctl() calls operations like
socreate() or soclose() on the udp(4) socket. The current implementation
has many relocking places, which breaks atomicity, so merge them into one.
The `sc_lock' rwlock(9) is held across the whole pflowioctl() call, so
`sc_dying' atomicity is not broken.
Not the ideal solution, but better than what we have now.
Tested by Hrvoje Popovski.
Discussed with and ok from sashan
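A minimal sketch of the merged lock dance; error handling and the actual
ioctl body are elided, names follow the commit message:

	int
	pflowioctl(struct ifnet *ifp, u_long cmd, caddr_t data)
	{
		struct pflow_softc *sc = ifp->if_softc;
		int error = 0;

		NET_UNLOCK();		/* sc_lock must never be taken under netlock */
		rw_enter_write(&sc->sc_lock);
		if (sc->sc_dying) {	/* stable for the whole call now */
			error = ENXIO;
			goto out;
		}
		/* ... may call socreate()/soclose() on the udp(4) socket ... */
	out:
		rw_exit_write(&sc->sc_lock);
		NET_LOCK();
		return (error);
	}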
|
|
(not used yet, because the pinsyscall changes are still being worked on)
ok kettenis
|
|
Protect all remaining write access to inp_faddr and inp_laddr with
the inpcb table mutex. Document inpcb locking for foreign and local
address and port and routing table id. Reading will be made MP
safe by adding per-socket rwlocks in a later step.
OK sashan@ mvs@
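A minimal sketch of a write under the table mutex; the fields follow
the commit message, the mutex name is illustrative:

	/* all writes to the address/port tuple go under the table mutex */
	mtx_enter(&table->inpt_mtx);
	inp->inp_faddr = faddr;
	inp->inp_fport = fport;
	mtx_leave(&table->inpt_mtx);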
|
|
implementation.
Set nkmempages to -1 by default instead of 0 so that the value ends up in
the data section. This way config(8) is able to alter the value as promised.
See also: https://github.com/llvm/llvm-project/issues/74632
OK miod@
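A minimal illustration of why the initializer matters: a non-zero value
keeps the symbol in the data section, where config(8) can patch it, while
a 0 initializer may let the compiler move it to .bss:

	int nkmempages = -1;	/* -1 = "compute at boot"; lives in .data */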
|
|
OK miod@
|
|
The new logic is:
Up to 1G physmem use physical memory / 4;
above 1G add an extra 16MB per 1G of memory.
Clamp it down depending on the available kernel virtual address space:
- up to and including 512M -> 64MB (macppc, arm, sh)
- between 512M and 1024M -> 128MB (hppa, i386, mips, luna88k)
- over 1024M clamp to VM_KERNEL_SPACE_SIZE / 4
The result is much more malloc(9) space on 64-bit archs with lots of memory
and large kva space.
Note: amd64 only has 4G of kva, therefore nkmempages is limited to 262144.
As a side effect, NKMEMPAGES_MAX and nkmempages_max are no longer used.
Tested and OK miod@
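A minimal sketch of the sizing logic in plain C, reading "add an extra
16MB per 1G" as 16MB for each gigabyte above the first; names and
constants are illustrative:

	#define MB(n)		((n) * 1024UL * 1024UL)
	#define MIN(a, b)	((a) < (b) ? (a) : (b))

	unsigned long
	kmem_bytes(unsigned long phys, unsigned long kva)
	{
		unsigned long kmem;

		if (phys <= MB(1024))
			kmem = phys / 4;
		else
			kmem = MB(1024) / 4 +
			    (phys - MB(1024)) / MB(1024) * MB(16);

		/* clamp by available kernel virtual address space */
		if (kva <= MB(512))
			kmem = MIN(kmem, MB(64));
		else if (kva <= MB(1024))
			kmem = MIN(kmem, MB(128));
		else
			kmem = MIN(kmem, kva / 4);	/* VM_KERNEL_SPACE_SIZE / 4 */

		return kmem;	/* nkmempages = kmem / PAGE_SIZE */
	}

With amd64's 4G of kva this clamps at 1G, i.e. 262144 pages, matching
the note above.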
|
|
From Ville Syrjala
f38b4e99e24cbc45084552fe50273ed847a4f511 in linux-6.1.y/6.1.68
20c2dbff342aec13bf93c2f6c951da198916a455 in mainline linux
|
|
From Jani Nikula
d9ef7b05ccd7f4f0d12b7aed2a2e5858809dd4a4 in linux-6.1.y/6.1.68
f2f9c8cb6421429ef166d6404426693212d0ca07 in mainline linux
|
|
From Ville Syrjala
cf70d62ace9070fb8be900fa87cb2e43cbc1fa9f in linux-6.1.y/6.1.68
9dd56e979cb69f5cd904574c852b620777a2f69f in mainline linux
|
|
From Ankit Nautiyal
e6d55cf4939987eb1761cb0cbf47af233123da87 in linux-6.1.y/6.1.68
9d04eb20bc71a383b4d4e383b0b7fac8d38a2e34 in mainline linux
|
|
From Candice Li
c8bf22e0d0499de0692a91290f923029230a5bd4 in linux-6.1.y/6.1.68
e0409021e34af50e7b6f31635c8d21583d7c43dd in mainline linux
|
|
From Candice Li
87509778718cffdee6412f0d39713f883208a013 in linux-6.1.y/6.1.68
b81fde0dfe402e864ef1ac506eba756c89f1ad32 in mainline linux
|
|
From Alex Deucher
4ccb34d4313b81d6268b1e68bd9a4e7309f096f6 in linux-6.1.y/6.1.68
6246059a19d4cd32ef1af42a6ab016b779cd68c4 in mainline linux
|
|
From Luben Tuikov
af6b1f1156fc2d886251a076b87243597301437c in linux-6.1.y/6.1.68
8782007b5f5795f118c5167f46d8c8142abcc92f in mainline linux
|
|
From Luben Tuikov
a3049c9a30131639f056a2b3db934c70ff91068a in linux-6.1.y/6.1.68
1bb745d7596d2b368fd9afb90473f3581495e39d in mainline linux
|
|
From Luben Tuikov
30289057ef8f8accd98ee41221c859a471f20c5c in linux-6.1.y/6.1.68
64a3dbb06ad88d89a0958ccafc4f01611657f641 in mainline linux
|
|
From Luben Tuikov
c67c553b4dd9a315919ae8990da367523fad0e38 in linux-6.1.y/6.1.68
3b8164f8084ff7888ed24970efa230ff5d36eda8 in mainline linux
|
|
From Luben Tuikov
ee9efcdc76af0dcb51579aa61c5019eabce93d73 in linux-6.1.y/6.1.68
da858deab88eb561f2196bc99b6dbd2320e56456 in mainline linux
|
|
From Candice Li
a945568638acfc7d2d95de520849857506b21252 in linux-6.1.y/6.1.68
c9bdc6c3cf39df6db9c611d05fc512b1276b1cc8 in mainline linux
|
|
From Candice Li
f549f837b9aca23983540fc6498e19eee8b3073a in linux-6.1.y/6.1.68
bc22f8ec464af9e14263c3ed6a1c2be86618c804 in mainline linux
|
|
From Prike Liang
458affed061935948d31f5d731bbcfbff3158762 in linux-6.1.y/6.1.68
c6df7f313794c3ad41a49b9a7c95da369db607f3 in mainline linux
|
|
From Srinivasan Shanmugam
41c5dd545e765bf4677a211d3c68808d7069e4a1 in linux-6.1.y/6.1.68
93125cb704919f572c01e02ef64923caff1c3164 in mainline linux
|
|
From Tim Huang
613eaee4459dfdae02f48cd02231cc177e9c37e7 in linux-6.1.y/6.1.68
6b0b7789a7a5f3e69185449f891beea58e563f9b in mainline linux
|
|
From YuanShang
9046665befd6e9b9b97df458dc4c41cfe63e21d3 in linux-6.1.y/6.1.68
50d51374b498457c4dea26779d32ccfed12ddaff in mainline linux
|
|
descriptor (pted) pool in the [riscv64] pmap implementation. This
significantly reduces the side-effects of lock contention on the kernel
map lock that is (incorrectly) translated into excessive page daemon
wakeups. This is not a perfect solution but it does lead to significant
speedups [on the Hifive Unmatched].
Improvement and commit message adapted from kettenis' rev 1.110 commit
to arm64/pmap.c. ok phessler@ kettenis@
|
|
it is a dangerous alternative entry point for all system calls, and thus
incompatible with the precision system call entry point scheme we are
heading towards. This has been a 3-year mission:
First perl needed a code-generated wrapper to fake syscall(2) as a giant
switch table, then all the ports were cleaned with relatively minor fixes,
except for "go". "go" required two fixes -- 1) a framework issue with
old library versions, and 2) like perl, a fake syscall(2) wrapper to
handle ioctl(2) and sysctl(2), because "syscall(SYS_ioctl" occurs all over
the "go" ecosystem; the "go developers" are plan9-loving unix-hating folk
who tried to build an ecosystem without allowing "ioctl".
ok kettenis, jsing, afresh1, sthen
|
|
doesn't make sense anymore. It is better to just issue an illegal
instruction.
ok kettenis, with some misgivings about inconsistent approaches between
architectures.
In the future we could change sigreturn(2) to never return an exit code,
but always just terminate the process. We stopped this system call
from being callable ages ago with msyscall(2), and there is no stub for
it in libc. Maybe that's the next step to take?
|
|
mpsafe.
The weird interactions around `pflow_flows' and `sc_gcounter' are replaced
by a simple `pflow_flows' increment. Since the flow sequence is a 32-bit
integer, the `sc_gcounter' type is replaced by uint32_t.
ok bluhm sashan
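A minimal sketch of the simplified counter; the helper name is
hypothetical, and atomic_inc_int_nv(9) returns the incremented value:

	static unsigned int pflow_flows;	/* 32-bit flow sequence */

	uint32_t
	pflow_new_seq(void)
	{
		return atomic_inc_int_nv(&pflow_flows);
	}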
|
|
descriptor (pted) pool in the arm64 pmap implementation. This
significantly reduces the side-effects of lock contention on the kernel
map lock that is (incorrectly) translated into excessive page daemon
wakeups. This is not a perfect solution but it does lead to significant
speedups on machines with many CPU cores.
This requires adding a new pmap_init_percpu() function that gets called
at the point where the kernel is ready to set up the per-CPU pool caches.
Dummy implementations of this function are added for all non-arm64
architectures. Some other architectures can probably benefit from
providing an actual implementation that sets up per-CPU caches for
pmap pools as well.
ok phessler@, claudio@, miod@, patrick@
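A minimal sketch of the new hook and its dummy counterpart; the pool
name follows the commit context and may differ from the source:

	/* arm64/pmap.c: switch the pted pool to per-CPU caches */
	void
	pmap_init_percpu(void)
	{
		pool_cache_init(&pmap_pted_pool);
	}

	/* other architectures: dummy implementation for now */
	void
	pmap_init_percpu(void)
	{
	}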
|
|
Fix the mask of the shifted 8-bit field from 0x7f to 0xff. This
allows proper decoding of the status fields SCT and SC.
From mlelstv@netbsd via NetBSD.
ok miod@
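A minimal sketch of the corrected decode, using the NVMe completion
status layout (bit 0 phase tag, bits 8:1 SC, bits 11:9 SCT); the helper
names are illustrative:

	uint8_t
	nvme_status_sc(uint16_t status)
	{
		return (status >> 1) & 0xff;	/* 8-bit field; 0x7f lost the top bit */
	}

	uint8_t
	nvme_status_sct(uint16_t status)
	{
		return (status >> 9) & 0x7;
	}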
|
|
Since revision 1.1182 of net/pf.c, the netlock is not taken while
export_pflow() is called from pf_purge_states(). The current lock
order requires the netlock to be taken before PF_LOCK(), so there is
no reason to bring it back into this path only for the optional
export_pflow() call. The `pflowif_list' foreach loop has no context
switch within, so an SMR list is better than a mutex(9).
Tested by Hrvoje Popovski.
ok sashan bluhm
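A minimal sketch of the read side, with an illustrative callee name;
nothing in the loop body may sleep inside the SMR read section:

	smr_read_enter();
	SMR_LIST_FOREACH(sc, &pflowif_list, sc_next) {
		/* no context switch inside this section */
		export_pflow_if(st, sc);
	}
	smr_read_leave();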
|
|
This adds per-core energy sensors (in Joules) and one per-SoC temperature
sensor.
OK kettenis@ deraadt@
|
|
ok sf@
|
|
From Ilya Bakoulin
10ce6301009fa46ba264ed75b822115ec3ca6e67 in linux-6.1.y/6.1.66
6f395cebdd8927fbffdc3a55a14fcacf93634359 in mainline linux
|
|
From Harry Wentland
8332cb6c63394f32117a6f46a8cf7bedb8eec0b1 in linux-6.1.y/6.1.66
27fc10d1095f7a7de7c917638d7134033a190dd8 in mainline linux
|