Age | Commit message | Author |
|
it is also not part of an aobj.
Clear anon flags at pagefree: page is no longer part of an anon.
ok oga
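A minimal sketch of the idea, assuming OpenBSD's atomic page-flag helpers and the PQ_ANON/PQ_AOBJ queue flags; this is illustration, not the committed diff:
/*
 * At page free time the page no longer belongs to an anon or an aobj,
 * so drop the flags that say it does and clear the back pointer.
 */
atomic_clearbits_int(&pg->pg_flags, PQ_ANON | PQ_AOBJ);
pg->uanon = NULL;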
|
|
Makes trace in ddb useful.
ok oga
|
|
uvm_km deals with kernel memory, which is either part of one of the
kernel maps or of the main kernel object (a uao). If km_pgremove hits a
busy page, just sleep on it; if that happens there is some async io in
flight (and that is unlikely). We can remove the check in
uvm_km_alloc1() for a released page, since now we will never end up
with a removed but released page in the kernel map (due to the other
chunk and the last diff).
ok ariane@. Diff survived several make builds, on amd64 and sparc64,
also forced paging with ariane's evil program.
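A hedged sketch of the "just sleep on the busy page" pattern; the flag names follow UVM convention, but the wmesg and control flow are illustrative only:
/*
 * If the page is busy (async i/o in flight), note that we want it and
 * sleep on the page itself; the real code drops the object lock around
 * the sleep and then re-looks the page up and retries.
 */
if (pg->pg_flags & PG_BUSY) {
	atomic_setbits_int(&pg->pg_flags, PG_WANTED);
	tsleep(pg, PVM, "km_pgrm", 0);	/* woken by whoever unbusies the page */
	/* retry the lookup */
}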
|
|
Now, the PG_RELEASED flag currently has two (maybe three) uses. The
valid one is for use with async io where we want to free the page after
we've paged it out. The other ones are "oh i'd like to free this, but
someone else is playing with it". It's simpler to just sleep on the
damned page instead and stop the fiddling.
First step does uao's: in uao_detach, sleep on the object and free it
when we're clean, instead of setting a flag so it's freed after. In
uao_flush, do the same. Change the iteration over the object in flush
so that we don't have to add marker pages or other such voodoo to the
list when we sleep (netbsd did that when they had a similar diff), just
use the hash always. We can now change uao_releasepg() to just free the
page, and not bother with the KILLME stuff. When the other objects are
fixed this hook will vanish.
Much discussion with art@ over the idea, and ariane@ over this specific
diff. As mentioned, this one is based loosely on a similar idea in
netbsd.
Been in my tree for a while, survived many make builds, etc, and forcing
paging using ariane's evil program.
ok ariane@, beck@.
|
|
sleep on them (and otherwise ignore them), sleep on the pointer to the
{aiodoned,pagedaemon}_proc members, and nuke the two extra words.
"no objections" art@, ok beck@.
|
|
Two cases of pool_get() + memset(0) -> pool_get(..., PR_ZERO).
1.5 cases where global variables are already zeroed, so don't zero them.
ok ariane@, comments on stuff i'd missed from blambert@ and cnst@.
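The shape of that change, using a hypothetical foo_pool for illustration:
/* Before: allocate from the pool, then zero by hand. */
fp = pool_get(&foo_pool, PR_WAITOK);
memset(fp, 0, sizeof(*fp));

/* After: let pool_get() do the zeroing. */
fp = pool_get(&foo_pool, PR_WAITOK | PR_ZERO);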
|
|
nothing uses this code yet, but might as well do it the right way.
"if you can't live without commiting this." miod@
|
|
ariane@ ok.
|
|
recursive in some cases (mostly involving swapping). A proper fix is in
the works, but this will unbreak kernels for now.
|
|
pages.
"go for it" miod@
|
|
needed.
"of course" art@.
|
|
of uvmexp.free.
"yeah, go for it" art@
|
|
addresses is another diff.
|
|
fraction of the wakeups and sleeps involved here actually grab that
lock. The remainder, on the other hand, always have the fpageq_lock
locked.
So, make this locking correct by switching the other users over to
fpageq_lock, too.
This would probably be better off being a semaphore, but for now at
least it's correct.
"ok, unless you want to implement semaphores" art@
|
|
For the possibility of sleeping, the first two flags are UVM_PLA_WAITOK
and UVM_PLA_NOWAIT. It is an error not to show intention, so assert that
one of the two is provided. Switch over every caller in the tree to
using the appropriate flag.
ok art@, ariane@
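A hedged example of a caller stating its intention explicitly; the size, physical range and pglist setup are made up for illustration:
struct pglist plist;
int error;

TAILQ_INIT(&plist);
/* One of UVM_PLA_WAITOK or UVM_PLA_NOWAIT must be passed; uvm asserts this. */
error = uvm_pglistalloc(PAGE_SIZE, 0, (paddr_t)-1, 0, 0,
    &plist, 1, UVM_PLA_NOWAIT);
if (error)
	return (ENOMEM);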
|
|
Fix up the one case of lock recursion (which blatantly ignored the
comment right above it saying that we don't need to lock). The rest of
the lock usage has been checked and appears to be correct.
ok ariane@.
|
|
PHYS_TO_VM_PAGE inline again. This should stop function call overhead
killing the vax and other slow archs while keeping the benefit for the
faster platforms.
suggested by miod. ok miod@, toby@.
|
|
the simple lock with a real lock - an IPL_BIO mutex. While I'm here, make
the sleeping condition one hell of a lot simpler in the aio daemon.
some ideas from and ok art@.
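A sketch of the resulting pattern under assumed names (aio_mtx and aio_done are illustrative, not the committed identifiers):
struct mutex aio_mtx;

mtx_init(&aio_mtx, IPL_BIO);		/* real lock instead of a simple_lock */

mtx_enter(&aio_mtx);
while (TAILQ_EMPTY(&aio_done))		/* the much simpler sleep condition */
	msleep(&aio_done, &aio_mtx, PVM, "aiodoned", 0);
/* take completed i/o off the list and process it */
mtx_leave(&aio_mtx);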
|
|
while there's io pending (async io makes that possible, but not often
hit), then we'll be waiting for the pgo_releasepg hook to free the
object when all of our pages disappear.
However, uvn_releasepg, while it does everything else that unreferencing
the object would do, neglects to do the final vrele() on the vnode.
So in this rare situation we'd end up with the vnode waiting around
until it was forcibly recycled. Fix this by adding in the missing vrele().
ok thib@
|
|
unused.
ok art@.
|
|
into an IPL_VM blocking mutex, also slightly extend the locked area so
that it actually protects access to the page array (as the comment on
the lock declaration says it should).
Ansify a few functions while I'm in the file.
"ok, even though you're sneaking in ansification in a diff. You dirty
you." art@
|
|
By pseudo-inline, I mean that if a certain macro were defined, they would
be inlined. However, no architecture defines that, and none has for a
very very long time. Therefore this mainly just makes the code a damned
sight easier to read. Some k&r -> ansi declarations while I'm in there.
"just commit it" art@. ok weingart@.
|
|
average arch port. They are also inline. This does not help, so de-inline
them. Shaves about 1k on i386 and amd64 bsd.mp, and probably similar
amounts on most architectures.
"no issue" beck@ "Nuke nuke nuke... make them functions" weingart@ "this
is good" art@
|
|
Since that function is now so small (2 lines), inline it into its only user.
Shaves some bytes (104 on amd64).
ok deraadt@, blambert@. djm@ liked an earlier diff.
|
|
- Split up run queues so that every cpu has one.
- Make setrunqueue choose the cpu where we want to make this process
runnable (this should be refined and less brutal in the future).
- When choosing the cpu where we want to run, make some kind of educated
guess where it will be best to run (very naive right now).
Other:
- Set operations for sets of cpus.
- Load average calculations per cpu.
- sched_is_idle() -> curcpu_is_idle()
tested, debugged and prodded by many@
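A deliberately simplified, self-contained illustration of the "educated guess" idea only; these are not the kernel's data structures or code:
/* Hypothetical per-cpu state, for illustration. */
struct cpu_load {
	int	idle;	/* non-zero when this cpu has nothing to run */
	int	load;	/* per-cpu load estimate */
};

/* Naive guess: prefer an idle cpu, otherwise the least loaded one. */
static int
choose_cpu(const struct cpu_load *cpus, int ncpu)
{
	int i, best = 0;

	for (i = 0; i < ncpu; i++) {
		if (cpus[i].idle)
			return (i);
		if (cpus[i].load < cpus[best].load)
			best = i;
	}
	return (best);
}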
|
|
K&R function declarations, so switch them all over to ansi-style, in
accordance with the prophecy.
"go for it" art@
|
|
NetBSD.
ok kurt@, drahn@, miod@
|
|
of uvm_km_pages.
ok deraadt@ tedu@
|
|
now and valid for __HAVE_PMAP_DIRECT archs only, though it implements
both code paths.
Put its code directly into uvm_km_getpage for PMAP_DIRECT archs.
No functional change.
ok tedu, art
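A hedged sketch of the direct-map fast path folded into uvm_km_getpage(); error handling and the non-PMAP_DIRECT path are omitted, and the flag choice is illustrative:
#ifdef __HAVE_PMAP_DIRECT
	struct vm_page *pg;
	vaddr_t va = 0;

	/* allocate a page and map it through the direct map: no kva needed */
	pg = uvm_pagealloc(NULL, 0, NULL, UVM_PGA_ZERO);
	if (pg != NULL)
		va = pmap_map_direct(pg);
#endif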
|
|
parameters such as cacheability, which is too different per-arch to be
MI.
discussed with miod, kettenis and art. ok miod@, art@.
|
|
Ok: miod, tedu
|
|
Ok miod, toby
|
|
ok miod@, art@
|
|
pages.
"looks good/no problems with it" tedu@ miod@ art@
|
|
``of course'' deraadt@.
|
|
add a new arg to the backend so it can tell pool to slow down. when we get
this flag, yield *after* putting the page in the pool's free list. whatever
we do, don't let the thread sleep.
this makes things better by still letting the thread run when a huge pf
request comes in, but without artificially increasing pressure on the backend
by eating pages without feeding them forward.
ok deraadt
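A sketch of the idea, assuming the backend hook grows an extra int * "slowdown" argument as described above; pool_page_insert here is a hypothetical helper, not the real function name:
int slowdown = 0;
void *v;

/* backend can set slowdown to ask us to back off */
v = pool_allocator_alloc(pp, flags, &slowdown);
if (v != NULL)
	pool_page_insert(pp, v);	/* page goes on the free list first */
if (slowdown && curproc != NULL)
	yield();			/* give the pagedaemon a turn, but never sleep */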
|
|
changes the pressure on the uvm system, uncovering several bugs. Some
of those bugs result in provable deadlocks. We'll have to reconsider
integrating this diff again after fixing those bugs.
ok art@
|
|
not taken anymore, but it doesn't hurt to be correct.
from NetBSD, through mickey in pr 5812
prodded by otto@
|
|
no ok's from anyone because they are all slacking
|
|
a suitable range and ran out of memory segments. Oops.
|
|
This will allow us to escape the limitations of kmem_map.
At this moment, the per-type limits are still enforced for all sizes,
but we might loosen that limit in the future after some thinking.
Original diff from Mickey in kernel/5761; I massaged it a little to
obey the per-type limits.
miod@ ok
|
|
Imagine lots of random small mappings (think malloc(3)) and sometimes
one large mapping (network buffer). If we've filled up our address space
enough, the random address picked for the large allocation is likely to
be overlapping an existing small allocation, so we'll do a linear scan
to find the next free address. That next free address is likely to
be just after a small allocation. Those two map entries get merged.
If we now allocate an amap for the merged map entry, it will be large.
When we later free the large allocation the amap is not truncated. All
these are design decisions that made sense for sbrk, but with random
allocations and malloc that actually returns memory, this really hurt us.
This is the reason why certain processes like apache and sendmail could
eat more than 10 times as much amap memory as they needed, eventually
hitting the malloc limit and hanging or running the machine out of
kmem_map and crashing.
otto@ ok
|