|
|
|
Based on a diff by Cedric Tessier, nezetic at gmail dot com, thanks!
Discussed with and ok jsg@
|
|
of iterating over the tree like a list. From Andriy Gapon in FreeBSD.
ok kettenis@
|
|
delete some stupid comments while there.
ok deraadt@ brad@
|
|
get multiple processes in the kernel these sets can't race and allow people
to set the default greater than the max.
|
|
in it because it's only called on new systems, when it actually does.
we don't care about old or new systems, just ours. the code is called; the
fact that it exists is enough to demonstrate that.
|
|
we refcount the bpf_d memory correctly so it can't go away. possibly worse
is that the bpf minor id could be reused between the kq calls, so this seems
safer to me. also avoids a list walk on each op because the ptr is just there.
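a rough sketch of that shape (illustrative only; bpf_d_ref()/bpf_d_unref()
are hypothetical stand-ins for the real refcounting helpers):

/* registration: resolve the descriptor once and pin it with a reference */
int
bpfkqfilter_sketch(dev_t dev, struct knote *kn)
{
        struct bpf_d *d = bpfilter_lookup(minor(dev));

        if (d == NULL)
                return (ENXIO);

        bpf_d_ref(d);           /* hypothetical: hold d for the knote's lifetime */
        kn->kn_hook = d;        /* later filter calls use the pointer directly */
        return (0);
}

/* filter op: no minor-number lookup and no descriptor list walk */
int
filt_bpfread_sketch(struct knote *kn, long hint)
{
        struct bpf_d *d = kn->kn_hook;

        kn->kn_data = d->bd_hlen;
        return (kn->kn_data > 0);
}

/* detach: the reference keeps bpf_d valid even if the device was closed */
void
filt_bpfrdetach_sketch(struct knote *kn)
{
        bpf_d_unref(kn->kn_hook);       /* hypothetical unref */
}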
|
|
large cluster pools and MCLGETI.
we could chain mbufs if we want to go even bigger.
with a fix from Mathieu- <naabed at poolp dot org>
|
|
end up waiting until the ring is full because the timeout doesn't get set up
when the knote is registered.
|
|
CH382 has 2S/2S1P/1P configurations; the diff supports 2S/2S1P only
(2S1P works as 2S).
ok by deraadt@ and kettenis@
|
|
atomic_{add,sub}_{int,long}_nv. sys/atomic.h turns these into the
rest of the atomic api.
on uniprocessor hppa systems "atomic" operations are implemented
as a non-interruptible sequence by disabling all interrupts on the
cpu, doing the operation, and then restoring the interrupt mask.
this isn't enough on MP systems, so we added a global atomic memory
mutex that is taken inside the interrupt disabling above to coordinate
operations between cpus.
this is a lot of overhead though, because mutexes dance around with
ipls, which is unnecessary in our case because of the interrupt
disabling that is already done. also, hppa spinlocks are implemented
with ldcw, which requires the word it operates on to be 16-byte
aligned. mutexes aren't guaranteed to have this alignment, so they
compensate by carrying several words internally so they can pick an
appropriately aligned one for the ldcw op.
with this in mind, this change pulls __cpu_simple_locks, which are
simply ldcw spinlocks with a 16-byte-aligned word, out of
src/sys/arch/hppa/include/lock.h into src/sys/arch/hppa/include/atomic.h
so atomic.h can use them. lock.h includes atomic.h, so it still
gets and provides the same functionality as before.
finally, this also pulls the rwlock cas implementation apart. cas
ops now share the same serialising lock on MP systems as the other
memory operations, and rw_cas is defined as a wrapper around
atomic_cas_uint.
ok kettenis@
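a condensed sketch of the resulting pattern (not the actual
sys/arch/hppa code; intr_disable()/intr_restore() here are stand-ins
for however the interrupt mask is saved and restored):

/* one 16-byte-aligned word shared by all cpus for the ldcw spinlock */
static __cpu_simple_lock_t atomic_lock = __SIMPLELOCK_UNLOCKED;

unsigned int
atomic_add_int_nv_sketch(volatile unsigned int *p, unsigned int v)
{
        register_t s;
        unsigned int nv;

        s = intr_disable();                     /* non-interruptible on this cpu */
#ifdef MULTIPROCESSOR
        __cpu_simple_lock(&atomic_lock);        /* serialise with other cpus */
#endif
        nv = *p + v;
        *p = nv;
#ifdef MULTIPROCESSOR
        __cpu_simple_unlock(&atomic_lock);
#endif
        intr_restore(s);                        /* put the interrupt mask back */
        return (nv);
}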
|
|
ok miod@
|
|
read/write operations.
|
|
Pointed out by LLVM.
db_disasm.c:1018:13: error: format string is not a string literal (potentially insecure)
ok miod@
|
|
to or from it.
ok gcc
|
|
least). after this i am confident that pools are mpsafe, i.e., can
be called without the kernel biglock being held.
the page allocation and setup code has been split into four parts:
pool_p_alloc is called without any locks held to ask the pool_allocator
backend to get a page and page header and set up the item list.
pool_p_insert is called with the pool lock held to insert the newly
minted page on the pool's internal free page list and update its
internal accounting.
once the pool has finished with a page it calls the following:
pool_p_remove is called with the pool lock held to take the now
unnecessary page off the free page list and uncount it.
pool_p_free is called without the pool lock and does a bunch of
checks to verify that the items arent corrupted and have all been
returned to the page before giving it back to the pool_allocator
to be freed.
instead of pool_do_get doing all the work for pool_get, it is now
only responsible for doing a single item allocation. if for any
reason it can't get an item, it just returns NULL. pool_get is now
responsible for checking if the allocation is allowed (according
to hi watermarks etc), and for potentially sleeping waiting for
resources if required.
sleeping for resources is now built on top of pool_requests, which
are modelled on how the scsi midlayer schedules access to scsibus
resources.
the pool code now calls pool_allocator backends inside its own
calls to KERNEL_LOCK and KERNEL_UNLOCK, so users of pools don't
have to hold biglock to call pool_get or pool_put.
tested by krw@ (who found a SMALL_KERNEL issue, thank you)
no one objected
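the page life cycle can be sketched like this (the pool_p_* names
follow the commit message; the wrapper names, bodies, and the struct
name are illustrative, not the real subr_pool.c):

int
pool_grow_sketch(struct pool *pp, int flags)
{
        struct pool_page_header *ph;    /* illustrative struct name */

        /* no locks held: the pool_allocator backend may sleep here */
        ph = pool_p_alloc(pp, flags);
        if (ph == NULL)
                return (ENOMEM);

        /* pool mutex held: publish the page and update the accounting */
        mtx_enter(&pp->pr_mtx);
        pool_p_insert(pp, ph);
        mtx_leave(&pp->pr_mtx);
        return (0);
}

void
pool_shrink_sketch(struct pool *pp, struct pool_page_header *ph)
{
        /* pool mutex held: take the now idle page off the free page list */
        mtx_enter(&pp->pr_mtx);
        pool_p_remove(pp, ph);
        mtx_leave(&pp->pr_mtx);

        /* no locks held: check the items and hand the page back */
        pool_p_free(pp, ph);
}

roughly, pool_get() then only has to call pool_do_get() and, for
PR_WAITOK callers, sleep on a pool_request when that fails.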
|
|
ok brad@
|
|
CMMU-to-CPU ratio case. This is necessary at least on the AV530 family.
|
|
Depending on DIAGNOSTICS, i82489_icr_wait() will either spin or panic in
this case. Therefore there is no need to check the flag again.
Under virtualization, this saves one VMEXIT per IPI.
ok kettenis@
|
|
|
|
|
|
|
|
ok by henning@
|
|
indirectly including it via dev/ic/pckbcvar.h
Fixes kernel builds without ukbd(4) and pckbd(4).
From Atticus on tech@
|
|
|
|
ok krw@, jsg@
|
|
the inteldrm code. Fix this by adding new interfaces that can map a single
page without sleeping and use that in the execbuffer fast path that needs
this "atomic" behaviour. Should fix the panic I've seen under memory pressure
on i386.
|
|
ok mpi@, uebayasi@, dlg@
|
|
|
|
|
|
|
|
since modern POSIX specifies them. OK guenther@
|
|
a chance of working instead of returning EINVAL.
ok miod@
|
|
tested by matthieu@ on Sabre lite
|
|
|
|
|
|
the failure path, which leaks all the stuff the previous code in
bpf_movein allocates.
since it's only called from bpfwrite, use M_WAIT instead to make
it reliable and just get rid of the bogus failure code.
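a minimal illustration of the difference (not the bpf_movein() code
itself):

        struct mbuf *m;

        /* M_DONTWAIT can return NULL, forcing an unwind/cleanup path ... */
        MGETHDR(m, M_DONTWAIT, MT_DATA);
        if (m == NULL)
                return (ENOBUFS);

        /* ... while M_WAIT sleeps until an mbuf is available */
        MGETHDR(m, M_WAIT, MT_DATA);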
ok miod@
|
|
ok dlg@
|
|
on all relevant device hierarchies in the appropriate order. For now this
means mpath(4) and mainbus(4), doing mpath(4) before mainbus(4) when
suspending or powering down and doing mpath(4) after mainbus(4) when
resuming such that mpath(4) can rely on the underlying hardware being
in a functional state.
Fixes problems with unflushed disk caches on machines where mpath(4) takes
control of some of your disks.
ok dlg@
|
|
(ST373405FSUN72G) respond to a START STOP UNIT command that spins down the
disk with a "Logical Unit Not Ready, Initialization Command Required".
Besides causing some dmesg spam, our sd(4) driver reacts to such a response
by spinning the disk back up. Prevent this from happening by respecting
the SCSI_IGNORE_NOT_READY flag and using that flag when spinning down the
disk.
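The shape of the change, as a hedged example (the exact call site and
flag combination in sd.c may differ):

        /* spin the disk down, ignoring "not ready" sense from quirky drives */
        scsi_start(sc->sc_link, SSS_STOP,
            SCSI_IGNORE_NOT_READY | SCSI_SILENT);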
ok miod@
|
|
ok mpi@ henning@ krw@
|
|
|
|
OK guenther@
|
|
|
|
No functional change as pid_t is defined as int32_t.
OK miod@
|
|
|
|
of EINVAL like other sysctl things do.
|
|
some pool users (e.g., mbufs and mbuf clusters) protect calls to pools
with their own locks that operate at high spl levels, rather than using
pool_setipl() to have pools protect themselves.
this means the pool's mtx_enter doesn't necessarily prevent interrupts
that will use a pool, so we get code paths that try to mtx_enter
twice, which blows up.
reported by vlado at bsdbg dot net and matt bettinger
diagnosed by kettenis@
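the mechanism being referred to, sketched with illustrative values (not
the exact mbuf pool setup):

        struct pool mbpool;

        pool_init(&mbpool, MSIZE, 0, 0, 0, "mbufpl", NULL);
        pool_setipl(&mbpool, IPL_NET);  /* mtx_enter now raises to IPL_NET */

        /*
         * without the pool_setipl() call the pool mutex is taken at a
         * low ipl, so a network interrupt can arrive while it is held,
         * call pool_get() on the same pool, and hit mtx_enter() a
         * second time on the same cpu, which is the blow up described
         * above.
         */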
|
|
|
|
|