|
ok jca@
|
|
of ELFDEFNNAME(NO_ADDR)
ok jca@
|
|
ok jca@
|
|
Reported by Ryan, pulse.purge at gmail.com
|
|
ok krw@
|
|
ok krw@
|
|
ok mpi@ henning@ sashan@
|
|
we need to make sure to clean the data cache and invalidate the instruction
cache upon entering a page with pmap_enter(). Since it is possible
that pmap_enter() does not directly enter the page, we need to do the
same dance in the pmap fault fixup code. Every new writable mapping
or write clears the page's flag, marking it unflushed. The next time
pmap_enter() is called or a fault happens on that VA, it has to be
flushed and invalidated again. This was heavily discussed with Dale
Rahn.
On the Pine64 and Raspberry Pi 3 we have been very lucky to not run
into any cache issues, especially with the instruction cache. The
AMD Seattle seems to be a different kind of beast where we actually
have to care about these things. This finally brings the machine
into userland.
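As a rough illustration (not the committed pmap code; the PG_FLUSHED flag
name and the cache helpers are assumptions standing in for the MD cache
ops), the flush-on-enter dance looks roughly like this:

/*
 * Sketch of the dance described above; PG_FLUSHED is a made-up page
 * flag, the cache helpers stand in for the real MD cache operations.
 */
void
pmap_flush_page_sketch(struct vm_page *pg, vaddr_t va)
{
	if ((pg->pg_flags & PG_FLUSHED) == 0) {
		cpu_dcache_wb_range(va, PAGE_SIZE);	/* clean data cache */
		cpu_icache_sync_range(va, PAGE_SIZE);	/* invalidate icache */
		atomic_setbits_int(&pg->pg_flags, PG_FLUSHED);
	}
	/*
	 * A new writable mapping or a write to the page would do
	 * atomic_clearbits_int(&pg->pg_flags, PG_FLUSHED), so the next
	 * pmap_enter() or fault flushes and invalidates it again.
	 */
}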
|
|
ok dlg@ a while ago
some input from jca@ who wrote the same diff
|
|
asynchronous callbacks. Make the IPsec functions void; there is
already a counter in the error path.
OK mpi@
|
|
pagetables as well. Also replace the number for write-back with a proper
define.
|
|
true. Instead, unless overridden by the device tree, we should ask the
generic timer for its frequency. This fixes time on my AMD Seattle and
should improve time management on QEMU as well.
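For reference, the arm64 generic timer advertises its frequency in the
CNTFRQ_EL0 register; reading it boils down to something like the sketch
below (the function name is illustrative, and a driver would presumably
only fall back to this when the device tree does not supply a frequency):

/* Read the arm64 generic timer frequency from CNTFRQ_EL0. */
static inline uint64_t
gentimer_get_freq(void)
{
	uint64_t freq;

	__asm volatile("mrs %x0, cntfrq_el0" : "=r" (freq));
	return freq;
}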
|
|
these currently to 255, making it impossible to use IRQs higher
than that. The AMD Seattle SoC, though, seems to provide 448 IRQs, which
is kind of out of bounds, so raise them to the proper values. This
makes interrupts work on that machine.
|
|
failed. Add a counter for that case.
OK dhill@
|
|
by pre-allocating two cryptodesc objects and storing them in an array
instead of a linked list. If more than two cryptodesc objects are
required, use mallocarray to fetch them. Adapt the drivers to the new
API.
This change results in one pool-get per ESP packet instead of three.
It also simplifies softraid crypto where more cryptodesc objects are
allocated than used.
From, with and ok markus@, ok bluhm@
"looks sane" mpi@
|
|
Found by Hrvoje Popovski.
|
|
useful to propagate the error. When an error occurs in an asynchronous
network path, incrementing a counter is the right thing. There are
four places where an error is not accounted for; just add a comment for
now.
OK mpi@ visa@
|
|
ok mpi@
|
|
|
|
This prevents a deadlock with the X server and some wireless drivers.
The real fix is to take unix domain socket code out of the NET_LOCK().
Issue reported by pirofti@ and ajacoutot@
ok tb@, stsp@, pirofti@
|
|
this lets me pass the specific argument to an aen handler in mfii.
it also unbreaks the tree.
found by jmatthew@
|
|
this didnt make sense previously since the mbuf pools had item
limits that meant the cpus had to coordinate via a single counter
to make sure the limit wasnt exceeded.
mbufs are now limited by how much memory can be allocated for pages
from the system. individual pool items are no longer counted and
therefore do not have to be coordinated.
ok bluhm@ as part of a larger diff.
|
|
this replaces individual calls to pool_init, pool_set_constraints, and
pool_sethardlimit with calls to m_pool_init. m_pool_init inits the
mbuf pools with the mbuf pool allocator, and because of that doesnt
set per pool limits.
ok bluhm@ as part of a larger diff
|
|
m_pool_init is basically a call to pool_init with everything except
the size and alignment specified, and a call to pool_set_constraints
so the memory is always dma reachable. it also wires up the memory
with the custom mbuf pool allocator.
ok bluhm@ as part of a larger diff
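based on that description, m_pool_init() amounts to something like the
following sketch (the exact arguments, ipl and flag values are
assumptions, not the committed code):

void
m_pool_init(struct pool *pp, u_int size, u_int align, const char *wmesg)
{
	/* everything but size and alignment is fixed for mbuf pools */
	pool_init(pp, size, align, IPL_NET, 0, wmesg, &m_pool_allocator);
	/* keep the backing memory dma reachable */
	pool_set_constraints(pp, &kp_dma_contig);
}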
|
|
the custom allocator is basically a wrapper around the multi page
pool allocator, but it has a single global memory limit managed by
the wrapper.
currently each of the mbuf pools has its own memory limit (or
none in the case of the myx pool) independent of the other pools.
this means each pool can allocate up to nmbclust worth of mbufs,
rather than all of them sharing the one limit. wrapping the allocator
like this means we can move to a single memory limit for all mbufs
in the system.
ok bluhm@ as part of a larger diff
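a sketch of what such a wrapper could look like (the names, accounting
variables and locking details are assumptions; the multi-page allocator
and the pool_allocator hooks are the real interfaces):

void	*m_pool_alloc(struct pool *, int, int *);
void	 m_pool_free(struct pool *, void *);

/* the wrapper allocator handed to every mbuf pool */
struct pool_allocator m_pool_allocator = {
	m_pool_alloc,
	m_pool_free,
	0,	/* pa_pagesz: let the pool pick */
};

static struct mutex m_pool_mtx = MUTEX_INITIALIZER(IPL_NET);
static u_long m_pool_used;	/* bytes currently handed to mbuf pools */
static u_long m_pool_limit;	/* single global limit for all mbuf pools */

void *
m_pool_alloc(struct pool *pp, int flags, int *slowdown)
{
	void *v;

	/* reserve space against the one global limit */
	mtx_enter(&m_pool_mtx);
	if (m_pool_used + pp->pr_pgsize > m_pool_limit) {
		mtx_leave(&m_pool_mtx);
		return NULL;
	}
	m_pool_used += pp->pr_pgsize;
	mtx_leave(&m_pool_mtx);

	/* the actual work is done by the multi page pool allocator */
	v = (*pool_allocator_multi.pa_alloc)(pp, flags, slowdown);
	if (v == NULL) {
		mtx_enter(&m_pool_mtx);
		m_pool_used -= pp->pr_pgsize;
		mtx_leave(&m_pool_mtx);
	}
	return v;
}

void
m_pool_free(struct pool *pp, void *v)
{
	(*pool_allocator_multi.pa_free)(pp, v);

	mtx_enter(&m_pool_mtx);
	m_pool_used -= pp->pr_pgsize;
	mtx_leave(&m_pool_mtx);
}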
|
|
|
|
|
|
more specifically, we probe the disk if it goes from UNCONFIGURED_GOOD
to a SYSTEM disk, and detach it if it goes from being a SYSTEM disk
to anything else.
this semantic comes from the lsi^Wavago code in the illumos mr_sas
driver. seems to work fine.
i think this covers all the ways a passthru disk can transition on
these boards.
|
|
|
|
this only handles MFI_EVT_PD_INSERTED_EXT and MFI_EVT_PD_REMOVED_EXT so
far. if this code is to be reused in mfi, it should probably change to
use MFI_EVT_PD_INSERTED and MFI_EVT_PD_REMOVED instead.
unlike mpii and mpi, it looks like the firmware aborts outstanding
commands against a disk when it's physically removed, so we dont
have to explicitly abort them. this is probably a carry over from
mfi generation boards which dont have an explicit abort command
they can use.
|
|
this submits MR_DCMD_CTRL_EVENT_WAIT commands via the async dcmd
path to read all events from boot onward, and eventually ends up
waiting after the boot messages are consumed.
right now none of the events are handled, but this can be added now
that this framework is in place.
the board does generate human readable log messages for every event.
we can send them somewhere (dmesg or syslog for example), but for
now theyre masked by #if 0.
|
|
async dcmds are submitted via an mpii request (like the scsi commands
are) which uses the ccb_request buffer, meaning that the dcmd itself
has to go somewhere else. this reuses the sense buffer on the ccb
for the dcmd, and provides wrappers for accessing that space and
submitting a dcmd via the passthru command.
|
|
|
|
Now that we can attach and detach devices, we need to make sure we
can do so while interrupts are running. Thankfully, in the meantime
the refcnt_init(9) API came around to help us out.
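A minimal sketch of that pattern (the softc and function names are
illustrative; refcnt_init, refcnt_take, refcnt_rele_wake and
refcnt_finalize are the real refcnt(9) calls):

#include <sys/refcnt.h>

struct dev_softc_sketch {
	struct refcnt	sc_refs;
	/* ... device state ... */
};

void
dev_attach_sketch(struct dev_softc_sketch *sc)
{
	refcnt_init(&sc->sc_refs);		/* starts with one reference */
}

void
dev_intr_work_sketch(struct dev_softc_sketch *sc)
{
	refcnt_take(&sc->sc_refs);		/* keep sc alive during the work */
	/* ... handle the interrupt-driven attach/detach work ... */
	refcnt_rele_wake(&sc->sc_refs);		/* drop the ref, wake detach */
}

void
dev_detach_sketch(struct dev_softc_sketch *sc)
{
	/* drop the initial ref and sleep until all others are gone */
	refcnt_finalize(&sc->sc_refs, "devdet");
}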
|
|
|
|
|
|
|
|
into separate functions. This makes them reusable from other parts of
the kernel. Assembly and header are taken from FreeBSD, but modified
to fit our requirements and with some unnecessary stuff removed. While
there, remove a micro-optimization for uniprocessor kernels.
|
|
OK mpi@
|
|
OK mpi@
|
|
|
|
profiling framework, for i386.
Code patching is used to enable probes when entering functions. The
probes will call a mcount()-like function to match the behavior of a
GPROF kernel.
A new sysctl knob, ddb.profile, needs to be set to 1 at securelevel 0
to be able to use this feature.
ok jasper@, guenther@, mlarkin@
|
|
other archs.
ok patrick@
|
|
there's a struct timeout in scsi_xfer for this purpose, which is
used to schedule a timeout of the command in the future. the timeout
adds the xs to a list in mfii_softc of outstanding commands that
are to be aborted. this list is processed in a task so we can sleep
for an mfii_ccb. the new ccb is used to issue an abort against the
specific command that timed out.
to avoid having a timeout complete at the same time as a command
on the chip, a refcnt is added to ccbs. the chip and the timeout
get a ref each. the mfii completion path will attempt to timeout_del,
and if that's successful it will subtract the timeout's ref as well
as its own. if it fails, the abort path owns the ccb and becomes
responsible for calling scsi_done on behalf of the mfii completion
path.
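in sketch form, the completion side of that scheme looks something like
this (the ccb and xs field names are assumptions; timeout_del(9),
refcnt(9) and scsi_done() are the real interfaces):

void
mfii_done_sketch(struct mfii_ccb_sketch *ccb)
{
	struct scsi_xfer *xs = ccb->ccb_cookie;

	/* try to take the pending abort timeout off the queue */
	if (timeout_del(&xs->stimeout)) {
		/* it never ran, so drop the reference it was holding */
		refcnt_rele(&ccb->ccb_refcnt);
	}

	/* drop the chip's reference; the last one out completes the xs */
	if (refcnt_rele(&ccb->ccb_refcnt))
		scsi_done(xs);
	/*
	 * otherwise the abort path still owns the ccb and will call
	 * scsi_done() on our behalf once the abort finishes.
	 */
}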
|
|
|
|
|
|
this has been pointed out to me by a couple of people now.
|
|
flag variable instead of checking some pointer assignment made earlier
in pae_bootstrap.
ok guenther
|
|
delete the no-longer-used probe hook support.
ok mpi@ jca@
|
|
This makes the API simpler, and is probably more useful than spreading
counters memory over several types, making it harder to track.
Prodded by mpi, ok mpi@ stsp@
|