|
recursion in pmap_enter as seen on zaurus.
ok art@
also, release the uvm_km_page.mtx before calling uvm_km_kmemalloc as we
can sleep there.
ok oga@
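A minimal sketch of the locking rule behind this fix, assuming a uvm_km_pages free-list structure with a mutex member; the wrapper function and everything except uvm_km_kmemalloc itself are illustrative, not the committed code:

#include <sys/param.h>
#include <sys/mutex.h>
#include <uvm/uvm_extern.h>

/*
 * Illustrative only: the point is that the page-list mutex must not be
 * held across uvm_km_kmemalloc(), because that call may sleep.
 */
void
km_refill_sketch(void)
{
        vaddr_t va;

        mtx_enter(&uvm_km_pages.mtx);
        /* ... work on the free page list while holding the mutex ... */
        mtx_leave(&uvm_km_pages.mtx);   /* drop it: the call below can sleep */

        va = uvm_km_kmemalloc(kernel_map, NULL, PAGE_SIZE, 0);

        mtx_enter(&uvm_km_pages.mtx);
        /* ... re-check and record the new page; state may have changed ... */
        mtx_leave(&uvm_km_pages.mtx);
        (void)va;
}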
|
|
which contains the constraints for DMA/memory allocation for each
architecture, and dma_constraints, which contains the range of addresses
that are DMA accessible by the system.
This is based on ariane@'s physcontig diff, with lots of bugfixes and
the following additions by myself:
Introduce a new function, pool_set_constraints(), which sets the address
range from which we allocate pages for the pool; this is now used
for the mbuf/mbuf cluster pools to keep them DMA accessible.
The !direct archs no longer stuff pages into the kernel object in
uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
Tested heavily by myself on i386, amd64 and sparc64. Some tests on
alpha and SGI.
"commit it" beck, art, oga, deraadt
"i like the diff" deraadt
|
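A rough sketch of the constraint idea described above; the struct follows the description, but the initializer values and the pool_set_constraints() argument list shown here are assumptions, not necessarily the committed signature:

#include <sys/param.h>
#include <sys/pool.h>
#include <uvm/uvm_extern.h>

/*
 * Illustrative only.  Each architecture describes the physical address
 * range its devices can DMA to/from; a pool that must stay DMA
 * accessible (mbufs, clusters) is then constrained to that range.
 */
struct uvm_constraint_range dma_constraint_sketch = {
        0x0,                    /* lowest DMA-able physical address */
        (paddr_t)0xffffffff     /* highest DMA-able physical address */
};

void
mbuf_pool_constrain_sketch(struct pool *pp)
{
        /* assumed argument list; the function name is from the commit */
        pool_set_constraints(pp, &dma_constraint_sketch, 1);
}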
|
for use by the uvm pseg code. this is the path of least resistance until
we sort out how many of these functions we really need. problem found by mikeb
ok kettenis oga
|
|
whether removing holes or parts of them is allowed or not.
Only allow hole removal in uvmspace_free(), when tearing the vmspace down.
ok art@
|
|
This has been tested very, very thoroughly on all archs we have
except 88k and 68k. Please see the cvs log for the individual commit
messages.
ok beck@, thib@
|
|
More backouts in line with previous ones; this appears to bring us back to a
stable condition.
A machine forced to 64MB of RAM cycled 10GB through swap with this diff
and is still running as I type this. Other tests by ariane@ and thib@
also seem to show that it's alright.
ok deraadt@, thib@, ariane@
|
|
This is for the same reason as the earlier backouts, to avoid the bug
either added or exposed sometime around c2k9. This *should* be the last
one.
prompted by deraadt@
ok ariane@
|
|
uvm_km deals with kernel memory which is either part of one of the
kernel maps or the main kernel object (a uao). If km_pgremove hits
a busy page, just sleep on it; if that happens there is some async io
in flight (and that is unlikely). We can remove the check in uvm_km_alloc1()
for a released page, since now we will never end up with a removed but
released page in the kernel map (due to the other chunk and the last diff).
ok ariane@. Diff survived several make builds, on amd64 and sparc64,
also forced paging with ariane's evil program.
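A hedged sketch of the busy-page wait described above, using the usual PG_BUSY/PG_WANTED protocol; illustrative only, not the committed km_pgremove code:

#include <uvm/uvm.h>

/*
 * Illustrative only: remove one page at offset "off" from a kernel
 * object, sleeping if it is busy (i.e. owned by async I/O).
 */
void
km_pgremove_one_sketch(struct uvm_object *uobj, voff_t off)
{
        struct vm_page *pg;

        for (;;) {
                pg = uvm_pagelookup(uobj, off);
                if (pg == NULL)
                        return;
                if (pg->pg_flags & PG_BUSY) {
                        /* mark it wanted, sleep, then look it up again */
                        atomic_setbits_int(&pg->pg_flags, PG_WANTED);
                        UVM_UNLOCK_AND_WAIT(pg, &uobj->vmobjlock, FALSE,
                            "km_pgrm", 0);
                        continue;
                }
                uvm_pagefree(pg);       /* not busy: safe to free */
                return;
        }
}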
|
|
of uvm_km_pages.
ok deraadt@ tedu@
|
|
now and valid for __HAVE_PMAP_DIRECT archs only, though it implements
both code paths.
Put its code directly into uvm_km_getpage for PMAP_DIRECT archs.
No functional change.
ok tedu, art
|
|
add a new arg to the backend so it can tell pool to slow down. when we get
this flag, yield *after* putting the page in the pool's free list. whatever
we do, don't let the thread sleep.
this makes things better by still letting the thread run when a huge pf
request comes in, but without artificially increasing pressure on the backend
by eating pages without feeding them forward.
ok deraadt
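A hedged sketch of the slowdown hint described above; the backend taking an extra slowdown argument follows the commit description, but the surrounding code and names are illustrative:

#include <sys/param.h>
#include <sys/pool.h>
#include <sys/systm.h>

/*
 * Illustrative only.  The backend sets *slowdown when its free-page
 * reserve is getting low.  The pool still keeps the page and puts it
 * on its free list first, and only then yields the CPU, so pressure on
 * the backend is not increased by throwing the page away.
 */
void
pool_grow_sketch(struct pool *pp, int flags)
{
        int slowdown = 0;
        void *v;

        v = pool_allocator_alloc(pp, flags, &slowdown);
        if (v == NULL)
                return;

        /* ... insert the new page into the pool's free list ... */

        if (slowdown && (flags & PR_WAITOK))
                yield();        /* let the page thread catch up; never sleep here */
}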
|
|
example an ioctl that loads bazillions of entries into a pf table) it
would exhaust the pool of free pages and not let uvm_km_thread catch
up until the pool was actually empty. This could be bad for non-sleeping
allocators since they can't wait for the memory while the big hog
can.
Instead of letting the syscall exhaust the pool, detect when we fall below
the low watermark, wake the thread, sleep once and let the thread
catch up. This paces the huge consumer so that the more critical consumers
never find an exhausted pool of pages.
"seems reasonable" kettenis@
|
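A hedged sketch of the pacing described above; the field and thread names are assumptions, not the committed code:

#include <sys/param.h>
#include <sys/mutex.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/*
 * Illustrative only.  A big consumer that drags the free list below
 * the low watermark wakes the km thread and sleeps exactly once, so
 * the thread can refill the list before it is exhausted and
 * non-sleeping allocators never find it empty.
 */
void
km_getpage_paced_sketch(void)
{
        mtx_enter(&uvm_km_pages.mtx);
        if (uvm_km_pages.free < uvm_km_pages.lowat) {
                wakeup(&uvm_km_pages.km_proc);          /* ask for a refill */
                msleep(&uvm_km_pages.free, &uvm_km_pages.mtx,
                    PVM, "kmpace", 0);
                /* sleep once only; do not loop until the list is full */
        }
        /* ... take a page off the free list ... */
        mtx_leave(&uvm_km_pages.mtx);
}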
|
|
|
|
|
when we hit swap before actually fully populating the buffer cache which
would lead to deadlocks.
From pedro, tested by many, deraadt@ ok
|
|
is aligned just fine, and in case we allocate the last piece of the
address space we don't want wrap-around to cause us to fail.
pointed out by and ok miod@
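A tiny illustration of the wrap-around concern: unsigned address arithmetic at the very top of the address space wraps to zero, so the sum must not be used naively in range checks. The function and its semantics are illustrative only:

#include <sys/param.h>

/*
 * Illustrative only.  map_last is the last usable address (inclusive).
 * Comparing "va + size" directly is wrong when the allocation ends at
 * the very top of the address space, because the sum wraps to 0.
 * Compute the last byte of the allocation instead; for a non-zero size
 * that fits, it never wraps.
 */
int
alloc_end_ok_sketch(vaddr_t va, vsize_t size, vaddr_t map_last)
{
        vaddr_t last = va + size - 1;

        return (last >= va && last <= map_last);
}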
|
|
uvm_km_kmemalloc.
"should probably go in" millert@, "I think it should too" deraadt@
|
|
miod@ ok
|
|
miod@ ok
|
|
miod@ ok
|
|
to separate locking; on most modern machines this is not enough,
since operations on short types touch other short types that share the
same word in memory.
Merge pg_flags and pqflags again and now use atomic operations to change
the flags. Also bump wire_count to an int; pg_version might go to an
int as well, just for alignment.
tested by many, many. ok miod@
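A hedged sketch of why the short fields had to be merged, and of the atomic bit operations now used; the struct and flag value are illustrative, not the real struct vm_page:

#include <sys/types.h>
#include <sys/atomic.h>

/*
 * Illustrative only.  Two u_int16_t fields packed into the same 32-bit
 * word cannot be protected by separate locks: a read-modify-write of
 * one field stores the neighbouring field back as well.  Merging them
 * into a single int and flipping bits atomically avoids the problem.
 */
struct page_flags_sketch {
        u_int   pg_flags;       /* merged flags, changed atomically */
        u_int   wire_count;     /* bumped to an int for alignment */
};

#define PG_BUSY_SKETCH  0x00000001      /* illustrative flag value */

void
set_busy_sketch(struct page_flags_sketch *pg)
{
        atomic_setbits_int(&pg->pg_flags, PG_BUSY_SKETCH);
}

void
clear_busy_sketch(struct page_flags_sketch *pg)
{
        atomic_clearbits_int(&pg->pg_flags, PG_BUSY_SKETCH);
}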
|
|
kmem_object) just so that we can remove them; just use pmap_extract
to get the pages to free, and simplify a lot of code so it does not deal with
the list of intrsafe maps, intrsafe objects, etc.
miod@ ok
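A hedged sketch of freeing the pages via pmap_extract() instead of looking them up in an intrsafe object; close in spirit to what the message describes, but illustrative rather than the committed code:

#include <uvm/uvm.h>

/*
 * Illustrative only.  With no kmem_object to search, ask the pmap
 * which physical page backs each kernel virtual address in the range
 * and free that page directly.
 */
void
km_pgremove_intrsafe_sketch(vaddr_t start, vaddr_t end)
{
        struct vm_page *pg;
        paddr_t pa;
        vaddr_t va;

        for (va = start; va < end; va += PAGE_SIZE) {
                if (!pmap_extract(pmap_kernel(), va, &pa))
                        continue;               /* nothing mapped here */
                pg = PHYS_TO_VM_PAGE(pa);
                if (pg != NULL)
                        uvm_pagefree(pg);
        }
}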
|
|
to "pg_flags" and "pg_version", so that they are a bit easier to work with.
Whoever uses generic names like this for a popular struct obviously doesn't
read much code.
Most architectures compile and there are no functionality changes.
deraadt@ ok ("if something fails to compile, we fix that by hand")
|
|
eyeballed by miod@ and pedro@
|
|
|
|
pass zero; this will be used shortly. From art@
|
|
no change for normal code
|
|
miod@ ok
|
|
physmem); miod@ toby@ ok
|
|
|
|
|
|
|
|
pages a process uses. this is now the userland "data size" value.
ok art deraadt tdeval. thanks testers.
|
|
scenarios, instead generating an ENOMEM backfeed, ok tedu@, prodded by many
|
|
what i intended all along, without contrived arithmetic screw up.
from discussions with mickey and deraadt
|
|
|
|
|
|
break out uvm_km_page bits for this case, no thread here
lots of testing tech@, deraadt@, naddy@, mickey@, ...
|
|
|
|
change both the nointr and default pool allocators to using uvm_km_getpage.
change pools to default to a maxpages value of 8, so they hoard less memory.
change mbuf pools to use default pool allocator.
pools are now more efficient, use less of kmem_map, and a bit faster.
tested mcbride, deraadt, pedro, drahn, miod to work everywhere
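A hedged sketch of a pool backend built on uvm_km_getpage()/uvm_km_putpage(); the struct pool_allocator hookup and the hook signatures are assumptions of that era, not necessarily the committed ones:

#include <sys/param.h>
#include <sys/pool.h>
#include <uvm/uvm_extern.h>

/*
 * Illustrative only: back a pool's pages with the managed kernel page
 * list instead of carving them out of kmem_map directly.
 */
void *
km_page_alloc_sketch(struct pool *pp, int flags)
{
        return uvm_km_getpage((flags & PR_WAITOK) ? TRUE : FALSE);
}

void
km_page_free_sketch(struct pool *pp, void *v)
{
        uvm_km_putpage(v);
}

struct pool_allocator pool_allocator_km_sketch = {
        km_page_alloc_sketch,   /* pa_alloc */
        km_page_free_sketch,    /* pa_free */
        0                       /* pa_pagesz: 0 means PAGE_SIZE */
};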
|
|
|
|
tested by jmc, brad, hshoexer
|
|
an interrupt safe thread.
use this as the new backend for mbpool and mclpool, eliminating the mb_map.
introduce a sysctl kern.maxclusters which controls the limit of clusters
allocated.
testing by many people, works everywhere but m68k. ok deraadt@
this essentially deprecates the NMBCLUSTERS option; don't use it.
this should reduce pressure on the kmem_map and the uvm reserve of static
map entries.
|
|
all architectures but arm, where it is needed.
|
|
uvm_unmap, uvm_deallocate and a few other functions.
Simplifies some code and reduces diff to the UBC branch.
|
|
and return a VM_PAGE. This is to allow sparc64 to cheaply record the
VAC color for those pages.
|
|
|
|
this prevents i-cache preload on some archs,
but does not hurt on others anyway.
art looked all over all the pmaps,
miod and mickey tested it on all possible archs,
deraadt made a lesson out of it for the rest of the folks.
|
|
known of it; and since the commit message does not give the rest of us
any feeling that this was tested by anyone, this is being removed. This
is not an area where one commits just because art agrees. And that is
what the commit message says.
|
|
art@ ok
|