Age | Commit message | Author |
|
provides an inline version of it.
|
|
(as this trips assertwaitok() in pool_get()).
This should get revisited soon.
"Commit it!" from many, as people like to be able to hit swap
without havoc.
|
|
ok miod@, oga@, tedu@
|
|
have been resolved.
|
|
vector setup that has questionable features (that have, as far as I can
tell, never been used in practice, at least not in OpenBSD), remove all
the gunk and favor a simple struct full of function pointers that get
set directly by each of the filesystems.
Removes gobs of ugly code and makes things simpler by a magnitude.
The only downside of this is that we lose the vnoperate feature, so
the spec/fifo operations of the filesystems need to be kept in sync
with specfs and fifofs; this is no big deal as the API itself is pretty
static.
Many thanks to armani@ who pulled an earlier version of this diff to
current after c2k10 and Gabriel Kihlman on tech@ for testing.
Liked by many. "come on, find your balls" deraadt@.
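Roughly, the new shape (a sketch with illustrative names, not the
committed definitions):

    /* Sketch: a plain struct of function pointers, filled in directly
     * by each filesystem, instead of a descriptor table assembled into
     * an ops vector at runtime. */
    int ufs_lookup(void *);
    int ffs_read(void *);
    int ffs_write(void *);

    struct vops {
            int (*vop_lookup)(void *);
            int (*vop_read)(void *);
            int (*vop_write)(void *);
            /* ... one pointer per vnode operation ... */
    };

    const struct vops ffs_vops = {
            .vop_lookup = ufs_lookup,
            .vop_read   = ffs_read,
            .vop_write  = ffs_write,
    };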
|
|
instead of doing work in the biodone callback for swapping
to file I/O, schedule the work to be done by the system
workq as it will call VOP_STRATEGY() in which we must be
allowed to sleep.
Thanks to Gabriel Kihlman for testing and spotting a bug in
the first version of this diff!
OK beck@, oga@
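The split this describes, sketched; workq_add_task(9) was the in-tree
task queue interface at the time, and the callback names here are
illustrative:

    void    swap_io_task(void *, void *);

    /* Interrupt context: the biodone callback must not sleep, so it
     * only queues the real work on the system workq (wq == NULL). */
    void
    swap_io_done(struct buf *bp)
    {
            workq_add_task(NULL, 0, swap_io_task, bp, NULL);
    }

    /* Process context: the workq thread may sleep, so VOP_STRATEGY()
     * is safe to call from here. */
    void
    swap_io_task(void *arg1, void *arg2)
    {
            struct buf *bp = arg1;

            VOP_STRATEGY(bp);
    }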
|
|
|
|
ok art@, oga@
|
|
Bogus chunks pointed out by matthew@ and miod@. No cookies for
marco@ and jasper@.
ok deraadt@ miod@ matthew@ jasper@ marco@
|
|
myself" panics that some people have seen over the last year-and-a-half.
Cherry picked from a more complex (and therefore scarier) diff from oga@.
ok tedu@, oga@
|
|
changes it was returning a constant 0; changing it to cope
with those changes makes less sense than just removing it,
as it provides the user with no useful information.
sthen@ grepped the ports tree for me and found no hits,
thanks!
OK deraadt@, matthew@
|
|
go back to something more like the previous design, and have the thread do
the heavy lifting. solves vmmaplk panics.
ok deraadt oga thib
[and even simple diffs are hard to get perfect. help from mdempsky and deraadt]
|
|
|
|
pools, sized by powers of 2, which are constrained to dma memory.
ok matthew tedu thib
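The size-class idea in miniature (illustrative names, not the actual
buffer cache code):

    /* Sketch: one backing pool per power-of-two size class; a request
     * is served from the pool with the next size at or above it. */
    struct pool bufpools[16];

    struct pool *
    bufpool_for_size(size_t size)
    {
            int b = 0;

            while ((size_t)1 << b < size)
                    b++;
            return (&bufpools[b]);
    }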
|
|
regular buf routines; and now we can swap again.
|
|
This is more clear, and as thib pointed out, the default in softraid was
wrong. ok thib.
|
|
enough.
ok tedu@, art@
|
|
UVM_PLA_WAITOK as it will not fail; rather, assert that it didn't fail.
ok tedu@, oga@
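In other words (sketch; the surrounding allocation is illustrative):

    /* Sketch: UVM_PLA_WAITOK means the allocator sleeps until it can
     * succeed, so a failure return would be a bug -- assert instead
     * of adding an error path. */
    error = uvm_pglistalloc(size, low, high, 0, 0, &pglist, 1,
        UVM_PLA_WAITOK);
    KASSERT(error == 0);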
|
|
ok tedu@, oga@
|
|
|
|
|
|
Use uvm_km_kmemalloc_pla with the dma constraint to allocate kernel stacks.
Yes, that means DMA is possible to kernel stacks, but only until we've fixed
all the scary drivers.
deraadt@ ok
|
|
as calls to uvm_km_free_wakeup can end up in uvm_mapent_alloc, which tries
to grab this mutex.
ok tedu@
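The ordering rule being enforced, sketched with an illustrative helper:

    /* Sketch: uvm_km_free_wakeup() can recurse into
     * uvm_mapent_alloc(), which grabs this mutex -- so drop the
     * mutex before the call. */
    mtx_enter(&uvm_km_page.mtx);
    va = pick_page_to_free();       /* illustrative helper */
    mtx_leave(&uvm_km_page.mtx);
    uvm_km_free_wakeup(kernel_map, va, PAGE_SIZE);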
|
|
no binary change.
|
|
Just like normal vs{,un}lock, but in case the pages we get are not dma
accessible, we bounce them; if they are dma accessible, the functions
behave exactly like normal vslock. The plan for the future is to have
fault_wire allocate dma accessible pages so that we don't need to bounce
(especially in cases where the same buffer is reused for physio over and
over again), but for now, keep it as simple as possible.
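A rough sketch of that behavior; every helper below is illustrative,
not the actual diff:

    int
    vslock_device_sketch(struct proc *p, void *addr, size_t len,
        void **retp)
    {
            if (wire_user_pages(p, addr, len) != 0)
                    return (EFAULT);
            if (pages_dma_accessible(addr, len)) {
                    *retp = NULL;   /* exactly like normal vslock */
                    return (0);
            }
            /* Not dma accessible: bounce via a dma-able buffer. */
            *retp = alloc_dma_bounce_buffer(len);
            return (*retp == NULL ? ENOMEM : 0);
    }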
|
|
than we can realistically dma to.
In the swap encrypt case we already bounce through an intermediate
buffer for pageout, so just make sure that buffer is constrained to
dmaable memory. In the other cases we check to see if the memory is
dmaable, and if not, we bounce it.
ok beck@, art@, thib@.
|
|
|
|
|
|
ok oga@
|
|
ok tedu@, beck@, oga@
|
|
that md code can peek at it, and update m68k !__HAVE_PMAP_DIRECT setup code
to the recent uvm_km changes.
ok thib@
|
|
else.
ok thib@
|
|
recursion in pmap_enter as seen on zaurus.
ok art@
also, release the uvm_km_page.mtx before calling uvm_km_kmemalloc, as
we can sleep there.
ok oga@
|
|
which contains the constraints for DMA/memory allocation for each
architecture, and dma_constraints, which contains the range of addresses
that are dma accessible by the system.
This is based on ariane@'s physcontig diff, with lots of bugfixes and
the following additions by myself:
Introduce a new function pool_set_constraints(), which sets the address
range from which we allocate pages for the pool; this is now used
for the mbuf/mbuf cluster pools to keep them dma accessible.
The !direct archs no longer stuff pages into the kernel object in
uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
Tested heavily by myself on i386, amd64 and sparc64. Some tests on
alpha and SGI.
"commit it" beck, art, oga, deraadt
"i like the diff" deraadt
|
|
between an allocating process failing and waking up the pagedaemon
and the pagedaemon (since everything was dandy).
Rework the do ... while () logic searching for pages of a certain
memtype in a pmr into a while () loop, where we check whether we've
found enough pages before breaking out of the pmr, and check the
memtype inside the loop. This prevents us from doing an early return
without enough pages for the caller even though more pages exist.
comments and help from oga, style nit from miod.
OK miod@, oga@
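The control-flow change in miniature (illustrative helpers, not the
pmemrange code itself):

    /* Before: a do { ... } while (memtype != memtype_start) loop
     * could stop after a single pass.  After: */
    while (found < wanted) {
            pg = next_free_page(pmr, memtype);      /* illustrative */
            if (pg != NULL) {
                    collect(pg);
                    found++;
                    continue;
            }
            memtype = next_memtype(memtype);        /* illustrative */
            if (memtype == memtype_start)
                    break;          /* this pmr is truly exhausted */
    }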
|
|
|
|
fix up prototypes etc.
ok oga@
|
|
|
|
uvm.pagedaemon_proc, do the wakeup on the
right ident.
this had been fixed, but the fix got backed
out during The Big Backout.
ok oga@
|
|
uvm_pdaemon.h, as it was only holding that one prototype.
OK art@, oga@, miod@, deraadt@
|
|
with a spinlock (even vslocked() buffers may fault in the right
(complicated) situation).
We solve this by preallocating a bounded array for the response and copying the
data out when all locks have been released.
ok thib@, beck@
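The shape of the fix (bound, struct and lock names illustrative):
gather under the lock, copy out only after it is released, since
copyout() can fault and sleep.

    struct entry kbuf[MAXENTRIES];  /* preallocated, bounded */
    int i, n = 0;

    mtx_enter(&table_mtx);
    for (i = 0; i < ntables && n < MAXENTRIES; i++)
            kbuf[n++] = table[i];   /* gather while locked */
    mtx_leave(&table_mtx);

    /* All locks released: faulting/sleeping is now safe. */
    error = copyout(kbuf, uaddr, n * sizeof(kbuf[0]));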
|
|
gets rid of #include <sys/dkio.h> in sys/ioctl.h and adds #include
<sys/dkio.h> to the places that actually want and use the disk
ioctls.
this became an issue when krw@'s X build failed when he was testing
a change to dkio.h.
tested by krw@
help from and ok miod@
|
|
I forgot that uvm_object.c wasn't built if SMALL_KERNEL. Fix this by building
the file unconditionally and only building the less used functions when
SMALL_KERNEL is not defined.
unbreaks ramdisk build. ok jsg@
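The arrangement, sketched (function name illustrative):

    /* uvm_object.c: always compiled; only the less used helpers are
     * conditional. */
    #ifndef SMALL_KERNEL
    int
    uvm_obj_seldom_used(void)
    {
            return (0);
    }
    #endif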
|
|
places in the tree need to be touched to update the object
initialisation with respect to that.
So, make a function (uvm_initobj) that takes the refcount, object and
pager ops and does this initialisation for us. This should save on
maintenance in the future.
looked good to fgs@. Tedu complained about the British spelling but OKed
it anyway.
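Such an initializer boils down to something like this; the field names
follow struct uvm_object of that era, but treat the body as a sketch
rather than the committed code:

    void
    uvm_initobj(struct uvm_object *uobj, struct uvm_pagerops *pgops,
        int refs)
    {
            uobj->pgops = pgops;
            TAILQ_INIT(&uobj->memq);
            uobj->uo_npages = 0;
            uobj->uo_refs = refs;
    }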
|
|
If, when we have successfully swapped an aobj back in, we release our
reference, and that reference is the last one, we will free the aobj
and recursively lock the list lock.
Fix this by keeping track of the last object we had a reference on, and
releasing the refcount the next time we unlock the list lock.
Put a couple of comments in explaining lock ordering in this file.
noticed by, discussed with and ok guenther@.
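The deferral pattern described above, sketched with illustrative list
walking:

    struct uvm_aobj *aobj, *last = NULL;

    mtx_enter(&uao_list_lock);
    while ((aobj = next_swapped_aobj()) != NULL) {  /* illustrative */
            uao_reference(&aobj->u_obj);
            mtx_leave(&uao_list_lock);
            /* The list lock is released: now it is safe to drop the
             * previous reference, even if that frees the aobj. */
            if (last != NULL)
                    uao_detach(&last->u_obj);
            swap_aobj_in(aobj);                     /* illustrative */
            last = aobj;
            mtx_enter(&uao_list_lock);
    }
    mtx_leave(&uao_list_lock);
    if (last != NULL)
            uao_detach(&last->u_obj);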
|
|
where there is almost nothing left to them, so that we can continue getting
rid of them
ok oga
|
|
most importantly swapoff) over to a mutex. No idea how many times i've
written this diff in the past.
ok deraadt@
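The conversion pattern, for reference (lock name illustrative):

    /* Sketch: simple_lock -> mutex for the swap bookkeeping. */
    struct mutex swap_data_lock = MUTEX_INITIALIZER(IPL_NONE);

    mtx_enter(&swap_data_lock);
    /* ... code that used to run under the simple_lock ... */
    mtx_leave(&swap_data_lock);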
|
|
ok oga
|
|
no functional change. from Anton Maksimenkov
|
|
recommit pmemrange:
physmem allocator: change the view of free memory from single
free pages to free ranges. Classify memory based on region with
associated use-counter (which is used to construct a priority
list of where to allocate memory).
Based on code from tedu@, help from many.
Usable now that bugs have been found and fixed in most architectures'
pmap.c.
ok by everyone who has done a pmap or uvm commit in the last year.
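The core data-model change, conceptually (struct and field names
illustrative):

    /* Sketch: free memory tracked as ranges of pages, classified by
     * the region's use counter, rather than as single free pages. */
    struct free_range {
            paddr_t start;          /* first free page address */
            paddr_t end;            /* one past the last free page */
            int use;                /* region use class/counter */
            RB_ENTRY(free_range) fr_entry;
    };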
|