Age | Commit message | Author |
|
myself" panics that some people have seen over the last year-and-a-half.
Cherry picked from a more complex (and therefore scarier) diff from oga@.
ok tedu@, oga@
|
|
changes it was returning a constant 0; changing it to cope
with those changes makes less sense than just removing it,
as it provides the user with no useful information.
sthen@ grepped the ports tree for me and found no hits,
thanks!
OK deraadt@, matthew@
|
|
go back to something more like the previous design, and have the thread do
the heavy lifting. solves vmmaplk panics.
ok deraadt oga thib
[and even simple diffs are hard to get perfect. help from mdempsky and deraadt]
|
|
|
|
pools, sized by powers of 2, which are constrained to dma memory.
ok matthew tedu thib
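In rough form, the allocation side of that idea looks like the sketch below;
the pool array, bucket count and sizes are assumptions for illustration, not
the committed interface.

    #include <sys/types.h>
    #include <sys/pool.h>

    /*
     * Illustrative sketch only: a set of power-of-two sized backend pools,
     * each assumed to have been created with pages constrained to DMA-able
     * memory.  The sizes and bucket count are made up for the example.
     */
    #define NDMAPOOLS	10			/* hypothetical: 16 bytes .. 8 KB */
    struct pool dmapools[NDMAPOOLS];		/* assumed initialized elsewhere */

    void *
    dma_alloc_sketch(size_t size, int flags)
    {
        size_t sz;
        int i;

        /* round the request up to the next power-of-two bucket */
        for (i = 0, sz = 16; i < NDMAPOOLS; i++, sz <<= 1) {
            if (size <= sz)
                return (pool_get(&dmapools[i], flags));
        }
        return (NULL);			/* too big for the backend pools */
    }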
|
|
regular buf routines; and now we can swap again.
|
|
This is more clear, and as thib pointed out, the default in softraid was
wrong. ok thib.
|
|
enough.
ok tedu@, art@
|
|
UVM_PLA_WAITOK as it will not fail; rather, assert that it didn't fail.
ok tedu@, oga@
|
|
ok tedu@, oga@
|
|
|
|
|
|
Use uvm_km_kmemalloc_pla with the dma constraint to allocate kernel stacks.
Yes, that means DMA is possible to kernel stacks, but only until we've fixed
all the scary drivers.
deraadt@ ok
|
|
as calls to uvm_km_free_wakeup can end up in uvm_mapent_alloc which tries
to grab this mutex.
ok tedu@
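The ordering issue reads roughly like the sketch below; the mutex name and the
surrounding bookkeeping are hypothetical, only the relationship between
uvm_km_free_wakeup and uvm_mapent_alloc comes from the message above.

    #include <sys/param.h>
    #include <sys/mutex.h>
    #include <uvm/uvm_extern.h>

    /* hypothetical mutex standing in for the one described above */
    struct mutex km_mtx_sketch = MUTEX_INITIALIZER(IPL_VM);

    void
    km_release_sketch(vaddr_t va, vsize_t len)
    {
        mtx_enter(&km_mtx_sketch);
        /* ... bookkeeping that genuinely needs the mutex ... */
        mtx_leave(&km_mtx_sketch);

        /*
         * Only call uvm_km_free_wakeup() with the mutex released: it can
         * end up in uvm_mapent_alloc(), which would take the mutex again.
         */
        uvm_km_free_wakeup(kernel_map, va, len);
    }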
|
|
no binary change.
|
|
Just like normal vs{,un}lock, but in case the pages we get are not dma
accessible, we bounce them; if they are dma accessible, the functions behave
exactly like normal vslock. The plan for the future is to have fault_wire
allocate dma accessible pages so that we don't need to bounce (especially
in cases where the same buffer is reused for physio over and over again),
but for now, keep it as simple as possible.
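Very roughly, the bounce decision looks like the sketch below; every helper
named here (the reachability check, the wiring call, the bounce allocation) is
a hypothetical stand-in, only the overall shape follows the description above.
The matching unlock side would copy the bounce buffer back for a read and then
free it.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>
    #include <sys/errno.h>

    /* hypothetical helpers standing in for the real wiring/bounce machinery */
    int	wire_user_range(struct proc *, void *, size_t);
    int	user_range_is_dma_reachable(struct proc *, void *, size_t);
    void	*dma_bounce_alloc(size_t);

    int
    vslock_device_sketch(struct proc *p, void *uaddr, size_t len, void **kvap)
    {
        int error;

        if ((error = wire_user_range(p, uaddr, len)) != 0)
            return (error);

        if (user_range_is_dma_reachable(p, uaddr, len)) {
            /* DMA-reachable: behave exactly like the normal vslock path */
            *kvap = uaddr;
            return (0);
        }

        /* not DMA-reachable: bounce through a DMA-able kernel buffer */
        *kvap = dma_bounce_alloc(len);
        return (copyin(uaddr, *kvap, len));	/* prefill for a device write */
    }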
|
|
than we can realistically dma to.
In the swap encrypt case we already bounce through an intermediate buffer
for pageout, so just make sure that buffer is constrained to
dmaable memory. In the other cases we check to see if the memory is
dmaable, then if not we bounce it.
ok beck@, art@, thib@.
|
|
|
|
|
|
ok oga@
|
|
ok tedu@, beck@, oga@
|
|
that md code can peek at it, and update m68k !__HAVE_PMAP_DIRECT setup code
to the recent uvm_km changes.
ok thib@
|
|
else.
ok thib@
|
|
recursion in pmap_enter as seen on zaurus.
ok art@
also, release the uvm_km_page.mtx before calling uvm_km_kmemalloc as we
can sleep there.
ok oga@
|
|
which contains the constraints for DMA/memory allocation for each
architecture, and dma_constraints which contains the range of addresses
that are dma accessible by the system.
This is based on ariane@'s physcontig diff, with lots of bugfixes and
the following additions by myself:
Introduce a new function pool_set_constraints() which sets the address
range from which we allocate pages for the pool; this is now used
for the mbuf/mbuf cluster pools to keep them dma accessible.
The !direct archs no longer stuff pages into the kernel object in
uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
Tested heavily by myself on i386, amd64 and sparc64. Some tests on
alpha and SGI.
"commit it" beck, art, oga, deraadt
"i like the diff" deraadt
|
|
between an allocating process failing and waking up the pagedaemon
and the pagedaemon (since everything was dandy).
Rework the do ... while () logic searching for pages of a certain
memtype in a pmr into a while () loop where we check if we've found
enough pages and break out of the pmr and check the memtype inside
the loop. This prevents us from doing an early return without enough
pages for the caller even though more pages exist.
comments and help from oga, style nit from miod.
OK miod@, oga@
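The reshaped search reads roughly as below; the struct and helper names are
hypothetical, the point is only that both the "enough pages?" test and the
memtype test now sit inside the loop, so the search never returns early with
fewer pages than were asked for.

    struct pmr_sketch;				/* hypothetical range descriptor */
    struct vm_page;
    struct vm_page *pmr_next_free_page(struct pmr_sketch *);
    int	pmr_page_memtype(struct vm_page *);

    int
    pmr_collect_sketch(struct pmr_sketch *pmr, int memtype, int wanted,
        struct vm_page **out)
    {
        struct vm_page *pg;
        int found = 0;

        while (found < wanted) {
            pg = pmr_next_free_page(pmr);	/* hypothetical iterator */
            if (pg == NULL)
                break;			/* range exhausted; caller moves on */
            if (pmr_page_memtype(pg) != memtype)
                continue;			/* wrong memtype: keep scanning */
            out[found++] = pg;
        }
        return (found);			/* caller checks found == wanted */
    }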
|
|
|
|
fix up prototypes etc.
ok oga@
|
|
|
|
uvm.pagedaemon_proc, do the wakeup on the
right ident.
this had been fixed, but the fix got backed
out during The Big Backout.
ok oga@
|
|
uvm_pdaemon.h as it was only holding that one prototype.
OK art@, oga@, miod@, deraadt@
|
|
with a spinlock (even vslocked() buffers may fault in the right
(complicated) situation).
We solve this by preallocating a bounded array for the response and copying the
data out when all locks have been released.
ok thib@, beck@
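The shape of that fix, with hypothetical names, is roughly: fill a bounded,
preallocated buffer while the locks are held, and only touch user memory once
every lock has been released, since copyout() can fault.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mutex.h>

    #define MAXENTRIES_SKETCH	128		/* the bound; made up for the example */

    struct entry_sketch { int value; };		/* stand-in for the real record */
    struct mutex table_mtx_sketch = MUTEX_INITIALIZER(IPL_NONE);

    size_t	snapshot_entries(struct entry_sketch *, size_t);	/* hypothetical */

    int
    fill_and_copyout_sketch(void *uaddr, size_t *lenp)
    {
        struct entry_sketch buf[MAXENTRIES_SKETCH];	/* preallocated, bounded */
        size_t n;

        mtx_enter(&table_mtx_sketch);
        n = snapshot_entries(buf, MAXENTRIES_SKETCH);	/* no faults while locked */
        mtx_leave(&table_mtx_sketch);

        /* all locks released: only now is it safe to touch user memory */
        *lenp = n * sizeof(buf[0]);
        return (copyout(buf, uaddr, *lenp));
    }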
|
|
gets rid of #include <sys/dkio.h> in sys/ioctl.h and adds #include
<sys/dkio.h> to the places that actually want and use the disk
ioctls.
this became an issue when krw@'s X build failed when he was testing
a change to dkio.h.
tested by krw@
help from and ok miod@
|
|
I forgot that uvm_object.c wasn't built if SMALL_KERNEL. Fix this by building
the file unconditionally and only building the less used functions when
SMALL_KERNEL is not defined.
unbreaks ramdisk build. ok jsg@
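The arrangement is the usual conditional-compilation guard, roughly as below;
the function names are placeholders, not the real uvm_object.c contents.

    /* uvm_object.c is now always built, even for ramdisk kernels ... */
    void
    uvm_obj_common_sketch(void)
    {
        /* the commonly used part stays available to SMALL_KERNEL builds */
    }

    #ifndef SMALL_KERNEL
    /* ... while the less used helpers are only compiled into full kernels */
    void
    uvm_obj_rare_sketch(void)
    {
    }
    #endif /* SMALL_KERNEL */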
|
|
places in the tree need to be touched to update the object
initialisation with respect to that.
So, make a function (uvm_initobj) that takes the refcount, object and
pager ops and does this initialisation for us. This should save on
maintenance in the future.
looked good to fgs@. Tedu complained about the British spelling but OKed
it anyway.
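The helper amounts to a few assignments; something along these lines, though
the committed name and the exact uvm_object fields should be treated as
assumptions here.

    #include <sys/queue.h>
    #include <uvm/uvm_object.h>

    /*
     * Sketch of the shared initialisation: one place that sets the pager ops,
     * empties the page queue and seeds the reference count, instead of every
     * pager repeating it.
     */
    void
    uvm_initobj_sketch(struct uvm_object *uobj, struct uvm_pagerops *pgops, int refs)
    {
        uobj->pgops = pgops;		/* pager operations for this object */
        TAILQ_INIT(&uobj->memq);	/* no pages attached yet */
        uobj->uo_npages = 0;
        uobj->uo_refs = refs;		/* caller-chosen initial reference count */
    }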
|
|
If, when we have successfully swapped an aobj back in, we then release our
reference count, and that reference is the last reference, we will free
the aobj and recursively lock the list lock.
Fix this by keeping track of the last object we had a reference on, and
releasing the refcount the next time we unlock the list lock.
Put a couple of comments in explaining lock ordering in this file.
noticed by, discussed with and ok guenther@.
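In rough outline the deferral looks like the sketch below; the lock, list and
refcount helpers are hypothetical stand-ins, only the "drop the previous
reference once the list lock is released" idea comes from the message above.

    #include <sys/param.h>
    #include <sys/mutex.h>

    struct aobj_sketch;				/* stand-in for the real aobj */
    struct aobj_sketch *first_aobj(void);
    struct aobj_sketch *next_aobj(struct aobj_sketch *);
    void	take_reference(struct aobj_sketch *);
    void	drop_reference(struct aobj_sketch *);	/* may free and take the list lock */
    void	swap_in_pages(struct aobj_sketch *);

    struct mutex aobj_list_mtx = MUTEX_INITIALIZER(IPL_NONE);

    void
    swapoff_walk_sketch(void)
    {
        struct aobj_sketch *aobj, *prev = NULL;

        mtx_enter(&aobj_list_mtx);
        for (aobj = first_aobj(); aobj != NULL; aobj = next_aobj(aobj)) {
            take_reference(aobj);		/* keeps aobj and our list position alive */
            mtx_leave(&aobj_list_mtx);

            /*
             * Dropping the previous reference here is safe: the list lock
             * is not held, so freeing that aobj cannot recurse on it.
             */
            if (prev != NULL)
                drop_reference(prev);
            swap_in_pages(aobj);
            prev = aobj;

            mtx_enter(&aobj_list_mtx);
        }
        mtx_leave(&aobj_list_mtx);
        if (prev != NULL)
            drop_reference(prev);		/* the last deferred reference */
    }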
|
|
where there is almost nothing left to them, so that we can continue getting
rid of them
ok oga
|
|
most importantly swapoff) over to a mutex. No idea how many times i've
written this diff in the past.
ok deraadt@
|
|
ok oga
|
|
no functional change. from Anton Maksimenkov
|
|
recommit pmemrange:
physmem allocator: change the view of free memory from single
free pages to free ranges. Classify memory based on region with
associated use-counter (which is used to construct a priority
list of where to allocate memory).
Based on code from tedu@, help from many.
Useable now that bugs have been found and fixed in most architectures'
pmap.c
ok by everyone who has done a pmap or uvm commit in the last year.
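At its core the new view is a range descriptor rather than per-page entries;
an illustrative shape only, the committed structures differ in detail:

    #include <sys/types.h>
    #include <sys/tree.h>

    /*
     * Illustrative only: free memory tracked as [start, end) page ranges,
     * with each region carrying a use counter that feeds the allocation
     * priority list described above.
     */
    struct free_range_sketch {
        paddr_t			start;	/* first free page address in the range */
        paddr_t			end;	/* one past the last free page address */
        int			use;	/* use counter for this region */
        RB_ENTRY(free_range_sketch)	entry;	/* kept sorted by address */
    };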
|
|
(because it pulls in so much of the world) so include it for now, but
mark it XXX
ok tedu
|
|
sysctl.h was reliant on this particular include, and many drivers included
sysctl.h unnecessarily. remove sysctl.h or add proc.h as needed.
ok deraadt
|
|
ok kettenis@ beck@ (tentatively) and ariane@. deraadt asked for it to be
committed now.
original commit message:
extend uvm_page_physload to have the ability to add "device" pages to
the system.
This is needed in the case where you need managed pages so you can
handle faulting and pmap_page_protect() on said pages when you manage
memory in such regions (i'm looking at you, graphics cards).
these pages are flagged PG_DEV, and shall never be on the freelists,
assert this. behaviour remains unchanged in the non-device case,
specifically for all archs currently in the tree we panic if called
after bootstrap.
ok art@ kettenis@, beck@
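The "never on the freelists" invariant boils down to an assertion in the free
path; the check shown is illustrative, only the PG_DEV flag name comes from
the message above.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <uvm/uvm.h>

    void
    page_free_sketch(struct vm_page *pg)
    {
        /*
         * PG_DEV pages are managed for fault handling but never allocatable,
         * so they must never end up back on a free list.
         */
        KASSERT((pg->pg_flags & PG_DEV) == 0);

        /* ... normal free-list insertion continues here ... */
    }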
|
|
for use by the uvm pseg code. this is the path of least resistance until
we sort out how many of these functions we really need. problem found by mikeb
ok kettenis oga
|
|
re-add uvm_objwire and uvm_objunwire.
"you may commit that" kettenis@
original diff oked by ariane@ and art@
|
|
It was backed out as part of the date-based revert after c2k9.
"you can commit that" kettenis@
original diff oked by ariane@, art@.
|
|
base of data; with nicm@ ok miod@ guenther@
|
|
more correctly reflect the new state of the world - that is - how many pages
can be cheaply reclaimed - which now includes clean buffer cache pages.
This change fixes situations where people would be running with a large
bufcachepercent, and still notice swapping without the buffer cache backing off.
ok oga@, testing by many on tech@ and others. Thanks.
|
|
- fixes ps(1)
- fixes kva deadbeef entries
|