to the debugger can cause a loop between the debugger and cursig()
if the signal is masked. cursig() has no way to know which signal
was already delivered to the debugger and so it delivers the same
signal over and over again.
Instead handle traps to masked signals directly in trapsignal. This
is what rev 1.293 was mostly about. If SIGTRAP was masked by the
process, breakpoints no longer worked since the signal delivery to
the debugger did not happen. Handling this case in trapsignal solves
both the problem with the loop and the delivery of masked traps.
Problem reported and fix tested by matthieu@
OK kettenis@ mpi@
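The shape of the fix, as a minimal sketch (the helper name and the exact
condition are illustrative, not the committed diff):

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <sys/signalvar.h>

	/*
	 * Sketch only: when a traced process takes a trap whose signal is
	 * currently masked, report it to the debugger right away instead of
	 * queueing it, so cursig() never has to rediscover (and re-deliver)
	 * the same signal in a loop.
	 */
	int
	masked_trap_for_debugger(struct proc *p, int signum)	/* illustrative helper */
	{
		struct process *pr = p->p_p;

		if ((pr->ps_flags & PS_TRACED) && (p->p_sigmask & sigmask(signum)))
			return 1;	/* handle here: stop and notify the debugger */
		return 0;		/* fall through to normal signal delivery */
	}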
|
|
from cursig() to postsig() or the caller itself. This will simplify locking.
Also alter sigactsfree() a bit and move it into process_zap() so ps_sigacts
is always a valid pointer.
OK semarie@
|
|
the parent of ptraced processes. Especially ignore the signal mask set
by sigprocmask(2) in that case. In userret() alter the check for
when to call cursig(), which is only there to avoid taking the
KERNEL_LOCK when returning from an MP-safe syscall. This can be revisited
once cursig() is MP safe.
Problem with debugging signal handlers found by kurt@
Tested and OK kurt@, OK mpi@
|
|
|
|
exec_elf_fixup() and coredump_elf() in <sys/exec_elf.h> and call
them and the MD setregs() directly in kern_exec.c and kern_sig.c.
Also delete e_name[] (only used by sysctl), e_errno (unused), and
e_syscallnames[] (only used by SYSCALL_DEBUG) and constipate
syscallnames to 'const char *const[]'
ok kettenis@
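For reference, the "constipated" declaration makes both the pointer array and
the strings it points to read-only; a minimal illustration (entries invented,
the real table is generated):

	/* Neither the pointers nor the strings they point to can be modified. */
	const char *const syscallnames[] = {
		"syscall",	/* illustrative entries, not the generated table */
		"exit",
		"fork",
	};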
|
|
function. This will make unlocking cursig() & postsig() a bit easier.
OK mpi@
|
|
to get a better order of functions. Also reduce the size of sigprop
to NSIG from NSIG+1. NSIG is defined as 33 and so already includes the
extra element needed for this array.
OK mpi@
|
|
sigsuspend(2) only returns upon delivery of a signal: we do not expect
a wakeup(9). Indicate this by sleeping on &nowake instead of
&p->p_p->ps_sigacts. We still need to loop here to handle spurious
wakeups, though.
Spurious wakeup case pointed out by kettenis@.
ok claudio@
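A minimal sketch of the resulting sleep loop (the wrapper function and headers
are illustrative; whether the tree spells the sleep as tsleep(9) or
tsleep_nsec(9) here is incidental):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/errno.h>

	/* Illustrative wrapper showing just the sleep loop, not the real syscall. */
	int
	sigsuspend_sleep(void)
	{
		/*
		 * &nowake is never passed to wakeup(9), so only signal delivery
		 * (PCATCH) ends the sleep for good; a spurious wakeup simply
		 * goes around the loop again.
		 */
		while (tsleep_nsec(&nowake, PPAUSE | PCATCH, "sigsusp", INFSLP) == 0)
			continue;
		return EINTR;	/* sigsuspend(2) always "fails" with EINTR */
	}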
|
|
ok semarie@
|
|
|
|
data from struct process anymore. This changes how siginfo and onstack
are accessed and makes sendsig() more MP friendly.
With and OK semarie@ OK kettenis@
|
|
to p_siglist and so there is no need to check ps_siglist for the signal.
OK mpi@
|
|
If the timespec is zero-valued, sys___thrsigdivert() should just do the
check for pending signals and return immediately.
OK kettenis@
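A minimal sketch of the zero-timeout decision (the function and the `pending`
stand-in are illustrative, not kernel code):

	#include <sys/time.h>
	#include <sys/errno.h>

	/*
	 * Illustrative only: models the zero-timeout path described above.
	 * "pending" stands in for the thread's pending-signal check.
	 */
	int
	zero_timeout_check(const struct timespec *ts, int pending)
	{
		if (!timespecisset(ts))			/* tv_sec == 0 && tv_nsec == 0 */
			return pending ? 0 : EAGAIN;	/* never sleep: succeed or fail now */
		return -1;				/* non-zero timeout: caller may sleep */
	}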
|
|
the signal handler code. Traditionally a process would spin in
such a case, but in revision 1.167 we changed the trapsignal() logic
so that the process receives a fatal signal instead. If that happens to
init(8), the kernel panics. In the reboot case, init jumps between its
signal handler and the page fault trap until the kernel resets the machine.
reported and tested weerd@; OK deraadt@
|
|
|
|
- Move the "hack" involving P_SINTR, used to avoid grabbing the SCHED_LOCK()
recursively, closer to where it is necessary, in proc_stop()
- Introduce proc_unstop(), the symmetric routine to proc_stop(), which
manipulates `ps_xsig', and use it whenever an SSTOPed thread needs to be
awakened.
- Manipulate `ps_xsig' only in proc_stop/unstop()
ok kettenis@
|
|
No functional change.
ok semarie@
|
|
single_thread_set() is modified to explicitly indicate when it is required
to wait until sibling threads are parked. This is obviously not required if
a traced thread is switching away from a CPU after handling a STOP signal.
ok claudio@
|
|
Kill SINGLE_PTRACE and use SINGLE_SUSPEND, which has almost the same semantics.
This diff did not properly kill SINGLE_PTRACE and broke RAMDISK kernels.
|
|
Ze big lock is currently necessary to ensure that two sibling threads
are not racing against each other when processing signals. However, it
is not strictly necessary to unpark sibling threads.
ok claudio@
|
|
single_thread_set() is modified to explicitly indicate when it is required
to wait until sibling threads are parked. This is obviously not required if
a traced thread is switching away from a CPU after handling a STOP signal.
ok claudio@
|
|
This exposes some redundant & racy checks.
ok semarie@
|
|
Use the SCHED_LOCK() to ensure `ps_thread' isn't being modified by a sibling
when entering tsleep(9) w/o KERNEL_LOCK().
ok visa@
|
|
We did not reach a consensus about using SMR to unlock single_thread_set()
so there's no point in keeping this change.
|
|
the SCHED_LOCK().
Putting a thread on a sleep queue is reduced to the following:
sleep_setup();
/* check condition or release lock */
sleep_finish();
Previous version ok cheloha@, jmatthew@, ok claudio@
|
|
Rename klist_{insert,remove}() to klist_{insert,remove}_locked().
These functions assume that the caller has locked the klist. The current
state of locking remains intact because the kernel lock is still used
with all klists.
Add new functions klist_insert() and klist_remove() that lock the klist
internally. This allows some code simplification.
OK mpi@
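Given that note, the new wrappers can be pictured as taking the kernel lock
around the _locked variants; a sketch consistent with the description above,
not necessarily the committed bodies:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/event.h>

	/* Sketch: lock internally, then reuse the caller-locked variant. */
	void
	klist_insert(struct klist *klist, struct knote *kn)
	{
		KERNEL_LOCK();
		klist_insert_locked(klist, kn);
		KERNEL_UNLOCK();
	}

	void
	klist_remove(struct klist *klist, struct knote *kn)
	{
		KERNEL_LOCK();
		klist_remove_locked(klist, kn);
		KERNEL_UNLOCK();
	}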
|
|
Make it obvious where the thread is blocked. "pause" is ambiguous.
Tweaked by kettenis@.
Probably ok kettenis@.
|
|
Currently all iterations are done under KERNEL_LOCK() and therefore use
the *_LOCKED() variant.
From and ok claudio@
|
|
Make sure `ps_single' is set only once by checking then updating it without
releasing the lock.
Analyzed by and ok claudio@
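The invariant being enforced is a plain test-and-set inside one critical
section; a minimal sketch (the helper name and the EAGAIN return are
illustrative, not the real function's error handling):

	#include <sys/param.h>
	#include <sys/proc.h>
	#include <sys/errno.h>

	/* Illustrative helper: claim `ps_single' without dropping the lock. */
	int
	single_thread_claim(struct process *pr, struct proc *p)
	{
		int s;

		SCHED_LOCK(s);
		if (pr->ps_single != NULL) {
			SCHED_UNLOCK(s);
			return EAGAIN;	/* another thread already single-threads */
		}
		pr->ps_single = p;	/* set while still holding the lock */
		SCHED_UNLOCK(s);
		return 0;
	}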
|
|
Panic reported by dhill@
|
|
Make sure `ps_single' is set only once by checking then updating it without
releasing the lock.
Analyzed by and ok claudio@
|
|
Simplify MD code and reduce the amount of recursion into the signal code
which helps when dealing with locks.
ok cheloha@, deraadt@
|
|
ok claudio@
|
|
struct sigacts since that is the only thing that is modified by siginit.
|
|
ok claudio@, pirofti@
|
|
Extend the scope of SCHED_LOCK() to better synchronize
single_thread_set(), single_thread_clear() and single_thread_check().
This prevents threads from suspending before single_thread_set() has
finished. If a thread suspended early, ps_singlecount might get
decremented too much, which in turn could make single_thread_wait()
get stuck.
The race could be triggered for example by trying to stop
a multithreaded process with a debugger. When triggered, the race
prevents the debugger from finishing a wait4(2) call on the debuggee.
This kind of gdb hang was reported by Julian Smith on misc@.
Unfortunately, single-thread mode switching still has issues and hangs
are still possible.
OK mpi@
|
|
ok kettenis@, visa@
|
|
The list can be accessed from interrupt context if a signal is sent
from an interrupt handler.
OK anton@ cheloha@ mpi@
|
|
subsystem and ps_klist handling still run under the kernel lock.
|
|
for example, with locking assertions.
OK mpi@, anton@
|
|
single_thread_check() safe to be called without KERNEL_LOCK().
single_thread_wait() needs to use sleep_setup() and sleep_finish()
instead of tsleep() to make sure no wakeup() is lost.
Input kettenis@, with and OK visa@
|
|
This ensures that the conditions checked are still in force. The sleep
breaks atomicity, allowing another thread to alter the state.
single_thread_set() should return immediately after sleep when called
from dowait4() because there is no guarantee that the process pr still
exists. When called from single_thread_set(), the process is that of
the calling thread, which prevents process pr from disappearing.
OK anton@, mpi@, claudio@
|
|
This shows that atomic_* operations should not be necessary to write
to this field, unlike with the process one.
The advantage of using a somewhat-unique prefix for struct members is
moot when multiple definitions use the same prefix :o)
From Amit Kulkarni, ok claudio@
|
|
kern_sig.c where they are currently added by the include. While doing
that mark the sigprop array as const.
OK mpi@ anton@ millert@
|
|
proc0 which is used for kthreads and idle threads. proc0 and all those
other kernel threads don't handle signals so there is no benefit in sharing.
Simplifies the code a fair bit since the refcnt is gone.
OK kettenis@
|
|
|
|
adding more filter properties without cluttering the struct.
OK mpi@, anton@
|
|
interrupt is enough to defer the signal handling. This is a leftover
from the times when not all archs had generic soft interrupts.
It is possible that deferring signal handling to a soft interrupt will
be removed at a later stage.
Input anton@, mpi@ OK kettenis@
|
|
process.
ok bluhm@ claudio@ visa@
|
|
The 3 subsystems (signal, poll/select and kqueue) can now be addressed
separately.
Note that bpf(4) and audio(4) currently delay the wakeups to a separate
context in order to respect the KERNEL_LOCK() requirement. Sockets (UDP,
TCP) and pipes spin to grab the lock for the same reasons.
ok anton@, visa@
|