Age | Commit message | Author |
|
ok mpi@
|
|
Almost all db_read_bytes() callers cast the destination buffer
argument to char*, which suggests the API's prototype is incompatible
with how the API is actually used.
Change db_read_bytes() and db_write_bytes() to take a void* as the
destination/source buffer parameter so callers don't need to cast the
argument.
With input from bluhm@. Bugs caught by Clemens Gossnitzer (ASCII
approximation of name).
Thread: https://marc.info/?l=openbsd-tech&m=170740813021636&w=2
ok bluhm@
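For illustration only (not the committed diff), the change amounts to the
following; the parameter types and the void return are recalled from memory
rather than quoted from db_access.h:

    /* before: every caller had to cast its buffer to char * */
    void db_read_bytes(vaddr_t addr, size_t size, char *data);

    /* after: any object pointer can be passed directly */
    void db_read_bytes(vaddr_t addr, size_t size, void *data);
    void db_write_bytes(vaddr_t addr, size_t size, void *data);

    /* a typical caller, no (char *) cast needed anymore */
    struct trapframe tf;
    db_read_bytes(addr, sizeof(tf), &tf);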
|
|
Use db_get_value() to access addr to ensure that alignment errors
don't cause exceptions. DDB on 32-bit archs does not normally handle
64-bit values, so printing 64-bit ints needs a bit of gymnastics.
OK mpi@
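A sketch of the gymnastics meant here, assuming the db_get_value(addr, size,
is_signed) interface and a little-endian layout; real code has to handle byte
order per arch:

    /* read a 64-bit value on a 32-bit arch as two aligned 32-bit reads */
    uint32_t lo, hi;
    uint64_t val;

    lo = db_get_value(addr, 4, 0);          /* no alignment trap */
    hi = db_get_value(addr + 4, 4, 0);
    val = ((uint64_t)hi << 32) | lo;        /* little-endian assumed */
    db_printf("%llx\n", (unsigned long long)val);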
|
|
Softdep has been a no-op for some time now; this removes it to get
it out of the way.
Flensing mostly done in Tallinn, with some help from krw@
ok deraadt@
|
|
Include missing fields -- like the sleep channel and message -- and
show both the PID and TID of the proc.
Also add '/t' as an argument that can be used to specify a proc by TID
instead of by address.
OK mpi@
|
|
Every platform made the clockintr switch at least six months ago.
The __HAVE_CLOCKINTR symbol is now redundant. Remove it.
Prompted by claudio@.
Link: https://marc.info/?l=openbsd-tech&m=168826181015032&w=2
"makes sense" mlarkin@
|
|
ok miod@ millert@
|
|
clockintr(9) is a machine-independent clock interrupt scheduler. It
emulates most of what the machine-dependent clock interrupt code is
doing on every platform. Every CPU has a work schedule based on the
system uptime clock. For now, every CPU has a hardclock(9) and a
statclock(). If schedhz is set, every CPU has a schedclock(), too.
This commit only contains the MI pieces. All code is conditionally
compiled with __HAVE_CLOCKINTR. This commit changes no behavior yet.
At a high level, clockintr(9) is configured and used as follows:
1. During boot, the primary CPU calls clockintr_init(9). Global state
is initialized.
2. Primary CPU calls clockintr_cpu_init(9). Local, per-CPU state is
initialized. An "intrclock" struct may be installed, too.
3. Secondary CPUs call clockintr_cpu_init(9) to initialize their
local state.
4. All CPUs repeatedly call clockintr_dispatch(9) from the MD clock
interrupt handler. The CPUs complete work and rearm their local
interrupt clock, if any, during the dispatch.
5. Repeat step (4) until the system shuts down, suspends, or hibernates.
6. During resume, the primary CPU calls inittodr(9) and advances the
system uptime.
7. Go to step (2). This time around, clockintr_cpu_init(9) also
advances the work schedule on the calling CPU to skip events that
expired during suspend. This prevents a "thundering herd" of
useless work during the first clock interrupt.
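As a rough illustration of steps 1-4 (not actual MD code: the function names
on the MD side, the placeholder intrclock and the 0 flags argument are all
assumptions for the sketch):

    #include <sys/clockintr.h>

    extern struct intrclock my_intrclock;   /* placeholder MD intrclock */

    void
    cpu_initclocks_sketch(void)
    {
            if (CPU_IS_PRIMARY(curcpu()))
                    clockintr_init(0);              /* step 1: global state */
            clockintr_cpu_init(&my_intrclock);      /* steps 2-3: per-CPU state */
    }

    int
    md_clock_intr_sketch(void *frame)
    {
            /* step 4: run expired events, rearm the local interrupt clock */
            return clockintr_dispatch(frame);
    }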
In the long term, we need an MI clock interrupt scheduler in order to
(1) provide control over the clock interrupt to MI subsystems like
timeout(9) and dt(4) to improve their accuracy, (2) provide drivers
like acpicpu(4) a means for slowing or stopping the clock interrupt on
idle CPUs to conserve power, and (3) reduce the amount of duplicated
code in the MD clock interrupt code.
Before we can do any of that, though, we need to switch every platform
over to using clockintr(9) and do some cleanup.
Prompted by "the vmm(4) time bug," among other problems, and a
discussion at a2k19 on the subject. Lots of design input from
kettenis@. Early versions reviewed by kettenis@ and mlarkin@.
Platform-specific help and testing from kettenis@, gkoehler@,
mlarkin@, miod@, aoyama@, visa@, and dv@. Babysitting and spiritual
guidance from mlarkin@ and kettenis@.
Link: https://marc.info/?l=openbsd-tech&m=166697497302283&w=2
ok kettenis@ mlarkin@
|
|
The only caller of db_ctf_decompress() passes a size_t for the length.
This eliminates sign comparison warnings without using casts.
OK jca@ tb@
|
|
The kernel source assumes the original zlib ABI. There is no reason to
stick to this local change. Pull in a fix matching ctfdump.c -r1.26.
This is hopefully the last change necessary to undo a painful hack that
was committed 19 years ago without ok. Someone owes me a lot of beer...
ok millert
|
|
It makes uvm_swap_free() faster: extents have a cost of O(n*n) which doesn't
really scale with gigabytes of swap.
Based on initial work from mpi@
The blist implementation comes from DragonFlyBSD.
The diff also adds a ddb(4) 'show swap' command to show the blist and help
debugging, and fixes some off-by-one errors in the sizes printed during hibernate.
ok mpi@
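For reference, the interface looks roughly like the DragonFly original; the
argument lists and the sentinel below are assumptions based on that version,
not a quote of the OpenBSD prototypes:

    #include <sys/blist.h>

    void
    swap_blist_sketch(swblk_t npages)
    {
            blist_t swapmap;
            swblk_t blkno;

            /* one bit per swap page; alloc/free cost O(log n), not O(n*n) */
            swapmap = blist_create(npages, M_WAITOK);
            blkno = blist_alloc(swapmap, 16);       /* grab a 16-page run */
            if (blkno != (swblk_t)-1)               /* "none free" sentinel assumed */
                    blist_free(swapmap, blkno, 16);
            blist_destroy(swapmap);
    }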
|
|
accessible from ddb. Implement "show all routes" to print routing
tables, and "show route 0xfffffd807e9b0000" for a single route
entry. Note that the rtable id is not part of a route entry, so
it makes no sense to print it there.
OK deraadt@
|
|
If ddb.console is set and your serial console driver uses it, db_rint()
lets you enter ddb(4) by typing the ESC D escape sequence. This is
useful for drivers like sfuart(4) where the hardware doesn't have a true
BREAK mechanism.
Suggested by miod@, ok kettenis@ miod@
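A sketch of how a driver's receive path might use this; the int db_rint(int c)
prototype and the convention that a nonzero return means the character was
consumed by the escape-sequence handling are both assumptions, and
myuart_rx_char() is a made-up driver function:

    void
    myuart_rx_char(struct tty *tp, int c)
    {
    #ifdef DDB
            if (db_console && db_rint(c))
                    return;         /* ESC D handled; don't pass c to the tty */
    #endif
            (*linesw[tp->t_line].l_rint)(c, tp);
    }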
|
|
ok jca@
|
|
Define a consistently named db_machine_command_table[] across all
archs that implement the MD "machine" command, and hook this into
the main command table instead of patching it at runtime.
ok mpi@ jca@
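A hedged sketch of what an arch now provides; the struct db_command field
order ({ name, fcn, flag, more }) is recalled from memory and the command
names are made up:

    /* in the arch's db_interface.c */
    const struct db_command db_machine_command_table[] = {
            { "sysregs",    db_show_sysregs_cmd,    0,      NULL },
            { "cpuinfo",    db_show_cpuinfo_cmd,    0,      NULL },
            { NULL,         NULL,                   0,      NULL }
    };

    /* in MI db_command.c, the "machine" entry points at it statically */
    { "machine",    NULL,   0,      db_machine_command_table },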
|
|
and "show all tdbs" in ddb.
tested by Hrvoje Popovski; OK mvs@
|
|
|
|
ok semarie@
|
|
this allows us to dynamically trace function boundaries with btrace by patching
prologues and epilogues with a breakpoint; when it is hit, the handler records
the data and sends it back to userland for btrace to consume.
currently it's hidden behind DDBPROF, and there is still a lot to clean up and
improve, but basic scripts that observe return codes from a probed function
work.
from Tom Rollet, with various changes by me
feedback and ok mpi@
|
|
|
|
atomically which CPU is currently tracing.
OK cheloha@
|
|
Add a 512-byte buffer (ci_panicbuf) to each cpu_info struct on each
platform for use by panic(9). The first panic on a given CPU writes
its message to this buffer. Subsequent panics on a given CPU print
the panic message to the console but do not modify the buffer. This
aids debugging in two cases:
- If 2+ CPUs panic simultaneously there is no risk of garbled messages
in the panic buffer.
- If a CPU panics and then the operator causes a second panic while
using ddb(4), the operator can still recall the first failure on
a particular CPU.
Misc. changes to support this bigger change:
- Set panicstr atomically to identify the first CPU to reach panic().
- Tweak db_show_panic_cmd() to print all panic messages across all
CPUs. Prefix the first panic with an asterisk ('*').
- Prefer db_printf() to printf() during a panic if we have it.
Apparently it disturbs less global state.
- On amd64, tweak fault() to write the local panic buffer. This needs
more work.
Prompted by bluhm@ and deraadt@. Mostly written by deraadt@.
Discussed with bluhm@, deraadt@ and kettenis@.
Borne from a discussion on tech@ about making panic(9) more MP-safe:
https://marc.info/?l=openbsd-tech&m=162086462316143&w=2
ok kettenis@, visa@, bluhm@, deraadt@
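Not the actual panic(9) code, but a minimal sketch of the scheme; the
ci_panicbuf and panicstr names are from this commit, the function and its
arguments are illustrative:

    void
    panic_record_sketch(struct cpu_info *ci, const char *msg)
    {
            /* first panic on this CPU: keep its message for later recall */
            if (ci->ci_panicbuf[0] == '\0')
                    strlcpy(ci->ci_panicbuf, msg, sizeof(ci->ci_panicbuf));

            /* first panic machine-wide atomically claims panicstr */
            atomic_cas_ptr(&panicstr, NULL, ci->ci_panicbuf);
    }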
|
|
|
|
ok visa
|
|
I missed the verbose pattern that it used for error checking the first
time around.
OK millert@
|
|
ok gkoehler@
|
|
ok kn
|
|
|
|
When ddb loads symbols, the .strtab contains char strings and doesn't
need long alignment. Our bootloader provides long alignment, but I
started loading symbols on powerpc64 without our bootloader.
ok mpi@ guenther@ kettenis@
|
|
require the debugger on most architectures, and the separation makes the
code easier to use from other subsystems.
The function definitions are still conditional to DDB. However, that
should not matter for now.
OK deraadt@, mpi@
|
|
and pass information to ddb. This helps to debug kernel NULL pointer
function calls.
input guenther@; OK kettenis@
|
|
Symbols not present in the CTF data are generally assembly routines which
either take no arguments or do not follow the ABI that the various MD stack
unwinders understand.
This makes ddb(4) trace simpler to understand.
ok jasper@
|
|
ok deraadt@
|
|
Spotted by deraadt@
|
|
ok dlg@, jasper@, anton@
|
|
From Christian Ludwig <christian_ludwig at genua dot de>
ok visa@
|
|
inputted line just ends at sizeof(db_history), ddb started writing the
history beyond the end of the region. diff from IIJ.
ok deraadt anton
|
|
|
|
ok deraadt
|
|
|
|
referenced to using a pointer to long. When writing to such a variable,
cast it to the correct type. Otherwise, on 64-bit architectures, writing
would also modify the next variable adjacent in memory.
ok deraadt@ visa@
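Illustrative only, not the committed diff: the kind of size-aware store the
cast amounts to, so a 4-byte variable's neighbour survives on LP64 targets:

    void
    db_store_sketch(void *var, size_t size, long value)
    {
            if (size == sizeof(int))
                    *(int *)var = value;    /* touch only the 4-byte variable */
            else
                    *(long *)var = value;   /* genuinely long-sized variable */
    }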
|
|
corresponding digits. So change the ddb x/x output.
OK sashan@ deraadt@ visa@ mpi@
|
|
|
|
OK jasper@
|
|
and indicate if a saved stack trace is empty.
OK guenther@
|
|
|
|
Currently there is only support for amd64; if this change settles,
I will add support for the rest of the architectures.
OK kettenis@.
|
|
ok pirofti@
|
|
for blocks re-fetchable from the filesystem. However at reboot time,
filesystems are unmounted, and since processes lack backing store they
are killed. Since the scheduler is still running, in some cases init is
killed... which drops us to ddb [noted by bluhm]. Solution is to convert
filesystems to read-only [proposed by kettenis]. The tale follows:
sys_reboot() should pass proc * to MD boot() to vfs_shutdown() which
completes current IO with vfs_busy VB_WRITE|VB_WAIT, then calls VFS_MOUNT()
with MNT_UPDATE | MNT_RDONLY, soon teaching us that *fs_mount() calls a
copyin() late... so store the sizes in vfsconflist[] and move the copyin()
to sys_mount()... and notice nfs_mount copyin() is size-variant, so kill
legacy struct nfs_args3. Next we learn ffs_mount()'s MNT_UPDATE code is
sharp and rusty especially wrt softdep, so fix some bugs and add
~MNT_SOFTDEP to the downgrade. Some vnodes need a little more help,
so tie them to &dead_vnops.
ffs_mount calling DIOCCACHESYNC is causing a bit of grief still, but
this issue is separate and will be dealt with in time.
couple hundred reboots by bluhm and myself, advice from guenther and
others at the hut
|
|
From Klemens Nanni.
|