Age | Commit message (Collapse) | Author |
|
device_unref() as found by deraadt@.
|
|
of UVM on PowerPC architectures by breaking pmap_is_referenced() and
friends.
ok kettenis@
|
|
include extra sync operations.
ok kettenis@
|
|
ok mpi@ deraadt@
|
|
processors that support it.
Due to the way trap code is patched it is currently not possible to
enable/disable BAT at runtime.
ok miod@, kettenis@
|
|
ok miod@, kettenis@ as part of a larger diff
|
|
Use this on vax to correctly pick the end of the stack area now that the
stackgap adjustment code will no longer guarantee it is a fixed location.
|
|
with and ok miod@
|
|
each arch used to have to provide an rw_cas operation, but now we
have the rwlock code build its own version. on smp machines it uses
atomic_cas_ulong. on uniproc machines it avoids interlocked
instructions by using straight loads and stores. this is safe because
rwlocks are only used from process context and processes are currently
not preemptible in our kernel. so alpha/ppc/etc might get a benefit.
ok miod@ kettenis@ deraadt@
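The scheme above can be sketched in user space. This is an illustrative C11 model only, not the kernel code: `atomic_compare_exchange_strong` stands in for `atomic_cas_ulong`, and `rw_try_enter_write` is a hypothetical helper; like the old per-arch `rw_cas`, the sketch returns 0 on success.

```c
#include <assert.h>
#include <stdatomic.h>

#define RWLOCK_WRLOCK	1UL

/* MP case: take the write lock with a compare-and-swap.
 * Returns nonzero on failure, like the old per-arch rw_cas. */
static unsigned long
rw_cas(_Atomic unsigned long *p, unsigned long o, unsigned long n)
{
	return !atomic_compare_exchange_strong(p, &o, n);
}

/* Hypothetical wrapper: 1 if the write lock was acquired, 0 if not. */
static int
rw_try_enter_write(_Atomic unsigned long *owner)
{
	return rw_cas(owner, 0UL, RWLOCK_WRLOCK) == 0;
}
```

On a uniprocessor build the CAS can be replaced by a plain load and store, which is only safe under the condition the commit states: rwlocks are taken from process context and processes are not preempted.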
|
|
|
|
|
|
to the bus base address, that's not the case with radeondrm(9) cards
on G5.
From miod@
|
|
|
|
race. This will certainly be revisited, but too much time has been
spent on it for now.
ok mpi
|
|
other archs.
Specify the caching policy by passing PMAP_* flags to pmap_kenter_pa()
like the majority of our archs do and kill pmap_kenter_cache().
Spread some pmap_update() along the way.
While here remove the unused flag argument from pmap_fill_pte().
Finally convert the bus map/unmap functions to km_alloc/free() instead
of uvm_km_valloc/free().
Inputs from kettenis@ and miod@, ok miod@
|
|
kernel, so update pmap_extract() accordingly and save a VP lookup.
While here unify pted checks after the VP lookups.
ok miod@
|
|
This brings bus_space_mmap(9) to socppc and changes its bus_space_map(9)
implementation to use kernel_map instead of phys_map like macppc and
everybody else.
|
|
unused typedef & external definitions.
|
|
ok kettenis@
|
|
for the kernel pmap and kill pmap_kremove_pg(). Finally guard the hash
lock code under "MULTIPROCESSOR" to make explicit which part of the code
received some MP love.
ok kettenis@
|
|
This changes the logic to prevent a recursion when processing soft
interrupts. Previously a per-CPU flag was set before re-enabling
interrupts. Now the IPL level is raised to SOFTTTY which makes
splsoftassert() happy, greatly inspired by mips64.
As a side effect, the ppc_intr_{disable,enable}() dance is now done
only once instead of twice per splx(9).
While here, make use of dosoftint() instead of having 3 different
functions for dispatching soft interrupts.
Tested by deraadt@ on G4 smp and by myself on G5 smp, G3, G4 and socppc.
No objection from the usual (and over busy) suspects.
|
|
pool allocator. pmapvp is 1024 bytes, and the size * 8 change in pools
without an allocator being specified tries to place it on large pages.
you need pmap to use large pages, and pmap isn't set up yet.
fixed a very early fault on macppc.
debugged with and tested by krw@
ok deraadt@ krw@
|
|
ok kettenis mpi
|
|
While here, use the direct map for pmap_copy_page() and remove the now
unused stolen page addresses.
No objection from the usual suspects, "it works, commit" deraadt@
|
|
mpi will investigate speedups after this.
ok mpi kettenis
|
|
PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
PROT_MASK is introduced as the one true way of extracting those bits.
Remove UVM_ADV_* wrapper, using the standard names.
ok doug guenther kettenis
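The PROT_MASK idea can be illustrated with a small sketch. `PROT_MASK` is OpenBSD-specific, so the fallback definition and the `prot_of` helper here are assumptions for portability, not the system's implementation:

```c
#include <assert.h>
#include <sys/mman.h>

/* PROT_MASK as the one true way of extracting the protection bits
 * from a flags word.  Defined here only if the host lacks it. */
#ifndef PROT_MASK
#define PROT_MASK	(PROT_READ | PROT_WRITE | PROT_EXEC)
#endif

/* Hypothetical helper: strip everything except PROT_* bits. */
static int
prot_of(int flags)
{
	return flags & PROT_MASK;
}
```

Callers that used to open-code `flags & (PROT_READ|PROT_WRITE|PROT_EXEC)` can mask with `PROT_MASK` instead, so adding a protection bit later only touches one definition.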
|
|
we have a proper X bit in the page tables. On 32-bit systems kernel .text is
handled by an IBAT, so we don't need page table entries that are executable
in the kernel pmap.
ok mpi@
|
|
to cover the first 8 MB of memory such that it covers kernel .text and not
much else. This is a first step towards W^X in the kernel for machines
with G4 and older processors.
ok mpi@
|
|
step towards W^X in the kernel, even though it is only effective on machines
with a G5 processor.
ok mpi@
|
|
while in the manpage add volatile where the code has it too.
ok miod@ guenther@
|
|
ok deraadt@, mlarkin@
|
|
base dance in inline assembly in various places.
tweak and ok miod@
|
|
months that I broke it before the 5.5 release.
confirmed as not being required by ports by sthen@, ajacoutot@, dcoppa@
|
|
sleep bit.
|
|
in the idle loop, in preparation for G5 support.
Only do a disable/enable interrupt dance if the running CPU supports a
sleep mode.
Fix entering ddb(8) from interrupt context by not modifying the return
address of the 'forced' trap frame.
While here, modify the existing logic to terminate prefetching of all
data streams if AltiVec is supported before setting the POW bit.
With inputs/explanations from drahn, looks ok to miod@
|
|
contexts with markers (like on x86) and print the associated type
or number when available.
While here, drop the support for process tracing (tr /p).
ok miod@
|
|
will be used in upcoming MP and idle support.
ok miod@
|
|
|
|
ok mpi@ sthen@
|
|
after discussions with beck deraadt kettenis.
|
|
* you can #include <sys/endian.h> instead of <machine/endian.h>,
and ditto <endian.h> (fixes code that pulls in <sys/endian.h> first)
* those will always export the symbols that POSIX specified for
<endian.h>, including the new {be,le}{16,32,64}toh() set. c.f.
http://austingroupbugs.net/view.php?id=162
if __BSD_VISIBLE then you also get the symbols that our <machine/endian.h>
currently exports (ntohs, NTOHS, dlg's bemtoh*, etc)
* when doing POSIX compiles (not __BSD_VISIBLE), then <netinet/in.h> and
<arpa/inet.h> will *stop* exporting the extra symbols like BYTE_ORDER
and betoh*
ok deraadt@
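The new `{be,le}{16,32,64}toh()` set can be modelled portably. This is a hypothetical stand-in (`my_be32toh`), not the `<endian.h>` implementation: it assembles the host value from big-endian byte order by reading the in-memory bytes, so it works regardless of the host's own endianness.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Portable model of be32toh(): interpret the argument's in-memory
 * bytes as a big-endian 32-bit value and return it in host order. */
static uint32_t
my_be32toh(uint32_t be)
{
	const uint8_t *p = (const uint8_t *)&be;

	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	    ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}
```

A wire buffer holding `de ad be ef` decodes to `0xdeadbeef` on any host, which is exactly the property the POSIX-proposed functions guarantee.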
|
|
The new CPU_BUSY_CYCLE() may be put in a busy loop body so that the CPU can
reduce power consumption, like Linux's cpu_relax() and FreeBSD's
cpu_spinwait(). To start minimally, use PAUSE on i386/amd64 and a no-op on
others. The name is
chosen following the existing cpu_idle_*() functions. Naming and API may be
polished later.
OK kettenis@
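The shape of such a macro can be sketched as follows; the macro name follows the commit, but the exact guards and definition here are illustrative, not the kernel's:

```c
#include <assert.h>

/* Sketch of a CPU_BUSY_CYCLE()-style spin-wait hint: the PAUSE
 * instruction on x86 tells the CPU it is in a busy loop so it can
 * save power and ease sibling-thread contention; elsewhere it
 * expands to nothing. */
#if defined(__i386__) || defined(__amd64__) || defined(__x86_64__)
#define CPU_BUSY_CYCLE()	__asm__ __volatile__("pause" ::: "memory")
#else
#define CPU_BUSY_CYCLE()	do { /* nothing */ } while (0)
#endif
```

A caller simply drops it into the body of a spin loop, e.g. `while (!flag) CPU_BUSY_CYCLE();`, and the semantics of the loop are unchanged on every architecture.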
|
|
includes
|
|
that user.h's tentacles fetched it even earlier.
|
|
It will no longer be pulled by uvm_extern.h in the short future.
ok jsg@
|
|
ok miod@, dlg@
|
|
ok mpi@
|
|
it needs to be done atomically on some MP archs and we don't have
atomic_add_int() everywhere yet. Also, mi_ast() was meant to be inline.
noted by miod@
|
|
ok guenther
|
|
ok deraadt@
|