|
outside the tree.
|
|
a) chooses incorrect kernel memory on the macppc
b) perhaps on zaurus too, which does not make it to the copyright message
c) was not tested on those platforms before commit
|
|
sel_addr &= ~(pmap_align - 1);
with pmap_align allowed to be 0 (no PMAP_PREFER) is a bad idea.
Fix this by a conditional.
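A minimal sketch of the guarded form (the surrounding allocator context is
assumed): with pmap_align == 0, (pmap_align - 1) is all ones, so the mask
~(pmap_align - 1) is 0 and would wipe sel_addr entirely.

	if (pmap_align != 0)
		sel_addr &= ~(pmap_align - 1);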
ok oga@
|
|
The new world order of pmemrange makes this data completely redundant
(being dealt with by the pmemrange constraints instead). Remove all code
that messes with the freelist.
While touching every caller of uvm_page_physload() anyway, add the flags
argument to all callers (all but one pass 0, and that one already used
PHYSLOAD_DEVICE) and remove the macro magic to allow callers to continue
without it.
Should shrink the code a bit, as well.
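A sketch of the resulting call shape (the exact prototype here is an
assumption based on the message):

	/* ordinary RAM */
	uvm_page_physload(start, end, avail_start, avail_end, 0);
	/* the one device-memory caller */
	uvm_page_physload(start, end, avail_start, avail_end, PHYSLOAD_DEVICE);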
matthew@ pointed out some mistakes i'd made.
``freelist death, I like. Ok.'' ariane@
``I agree with the general direction, go ahead and i'll fix any fallout
shortly'' miod@ (68k, 88k and vax i could not check would build)
|
|
for (some; stuff; here)
	;
instead of
for (some; stuff; here);
reads easier.
ok ariane@
|
|
ok ariane@
|
|
This makes writing a diff that makes 64-bit unclean applications explode
a one-line diff.
ok deraadt
|
|
The old VM_MAP_RANGE_CHECK macro was wrong and caused code to be unreadable
(argument altering macros are harmful).
Each function now treats the memory range outside the map as it would treat
free memory: if it would error on being given free memory, it'll error
in a similar fashion when the start,end parameters fall outside the map.
If it would accept free memory in its argument range, it'll silently accept
the outside-map memory too.
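A sketch of the convention for a function that errors on free memory
(names illustrative):

	if (start < map->min_offset || end > map->max_offset)
		return (EINVAL);	/* same answer as for free memory */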
Confirmed to help ports build machines.
|
|
vmmap is designed to perform address space randomized allocations,
without letting fragmentation of the address space go through the roof.
Some highlights:
- kernel address space randomization
- proper implementation of guardpages
- roughly 10% system time reduction during kernel build
Tested by a lot of people on tech@ and developers.
Theo's machines are still happy.
|
|
Before we were only calling uao_dropswap() if there was a page, meaning
that if the buffer was swapped out then we would leak the slot.
Quite rare because only pipebuffers should swap from the kernel object,
but i've seen panics that implied this had happened (alpha.p for example).
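A minimal sketch of the fixed ordering (simplified; the loop over the
object's offsets is assumed):

	pg = uvm_pagelookup(uobj, curoff);
	/* drop the slot even when pg == NULL (page swapped out),
	 * otherwise the swap slot leaks */
	uao_dropswap(uobj, curoff >> PAGE_SHIFT);
	if (pg != NULL)
		uvm_pagefree(pg);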
ok thib@ after a lot of discussion and checking the semantics.
|
|
it belongs to a world order that isn't here anymore. More importantly it
has been unused for a fair while now.
ok thib@
|
|
hashtable to keep the list of swap slots in use in. Only one of these
will be in use at any one time, so shave some bytes and make it a union.
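A sketch of the shape (field names here are assumptions, not the committed
ones):

	union {
		int			*slots;	/* small aobjs: flat array */
		struct uao_swhash	*hash;	/* large aobjs: hashtable */
	} u_swap;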
ok thib@
|
|
This header defined three things, two of which are unused throughout the tree;
the final one was the definition of the pagq head type. Move that to uvm_page.h
and nuke the header.
ok thib@. Thanks to krw@ for testing the hppa build for me.
|
|
Therefore set UVM_FLAG_FIXED and enforce this.
ok oga@
|
|
when he copied this code from uvm_km_putpage() into km_free().
Found independently by ariane@; ok deraadt@
|
|
ok deraadt@, miod@
|
|
The problems during the hackathon were not caused by this (most likely).
prodded by deraadt@ and beck@
|
|
ok miod
|
|
page so that uvm_anfree will free it for us.
uvm_anfree does a pmap_page_protect(, VM_PROT_NONE) just before it frees the
page, so we don't need to do it here ourselves.
ok ariane@
|
|
(uvm_atopg) and use it in uvm_km_doputpage to replace some handrolled
code. Shrinks the kernel a trivial amount.
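The helper is roughly this shape (a sketch; the exact asserts in the tree
may differ):

	struct vm_page *
	uvm_atopg(vaddr_t kva)
	{
		paddr_t pa;

		if (pmap_extract(pmap_kernel(), kva, &pa) == FALSE)
			panic("uvm_atopg");
		return (PHYS_TO_VM_PAGE(pa));
	}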
ok beck@ and miod@ (who suggested i name it uvm_atopg not uvm_atop)
|
|
At various times diffs have had debugging that checked that we don't
insert a page into the tree on top of an existing page, leaking that
page's references. Until the recent hackathon (and the introduction of
uvm_pagealloc_multi) the bufcache, for example, did an rb tree lookup on
insert to check (under #ifdef DEBUG || 1). Instead, just check it on
page insert every time; since RB_INSERT returns any duplicate, this
check is pretty much free.
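A sketch of the insert-time check (tree and member names assumed):

	struct vm_page *dupe;

	dupe = RB_INSERT(uvm_objtree, &uobj->memt, pg);
	KASSERT(dupe == NULL);	/* RB_INSERT hands back any duplicate */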
``emphatically yes'' beck@
|
|
global hash I forgot to remove the hash declarations from struct uvm. So
remove them now.
pointed out by blambert@, ok beck@
|
|
ok matthew@ tedu@, also eyeballed by at least krw@ oga@ kettenis@ jsg@
|
|
an uninitialized variable to uvm_km_free().
|
|
and we aren't sure what's causing them.
shouted oks by many before I even built a kernel with the diff.
|
|
ok ariane
|
|
code block (not 'high_next' but 'low').
While here, change the KASSERT to a KDASSERT.
Pointed out by Amit Kulkarni.
ok thib@, miod@
|
|
- Use km_alloc for all backend allocations in pools.
- Use km_alloc for the emergency kentry allocations in uvm_mapent_alloc.
- Garbage collect uvm_km_getpage, uvm_km_getpage_pla and uvm_km_putpage
ariane@ ok
|
|
Pointed out and ok mlarkin@
|
|
- Change a few KASSERT(0) into proper panics.
- Match the new behavior of single page freeing.
- kremove pages and then free them, it's safer (see the sketch below).
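A sketch of that ordering (names assumed):

	pmap_kremove(va, PAGE_SIZE);	/* kill the mapping first... */
	pmap_update(pmap_kernel());
	uvm_pagefree(pg);		/* ...then free the page */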
thib@ ok
|
|
- Clarify a comment.
- Change all the flags to chars from ints to make the structs smaller.
|
|
to userland.
Wrap the checking code in #if NVND > 0 as pointed out
by miod.
ok beck@
ok deraadt@, krw@ (on an earlier diff)
|
|
We've reached the point where we have a dozen allocators that all do more
or less the same thing, but slightly different, with slightly different
semantics, slightly different default behaviors and default behaviors that
most callers don't expect or want. A random sample on the last general
hackathon showed that no one could explain what all the different allocators
did. And every time someone needed to do something slightly different a
new allocator was written.
Unify everything. One single function to allocate multiples of PAGE_SIZE
kernel memory. Four arguments: size, how va is allocated, how pa is allocated
and misc parameters. Same parameters are passed to the free function so that
we don't need to guess what's going on.
Functions are currently unused, we'll do one thing at a time to avoid a
giant commit.
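The interface being described, as a sketch (the mode struct and variable
names below are assumptions about the in-tree spelling):

	void	*km_alloc(size_t, const struct kmem_va_mode *,
		    const struct kmem_pa_mode *, const struct kmem_dyn_mode *);
	void	 km_free(void *, size_t, const struct kmem_va_mode *,
		    const struct kmem_pa_mode *);

	/* e.g. one page of zeroed kernel memory, sleeping if needed: */
	void *v = km_alloc(PAGE_SIZE, &kv_any, &kp_zero, &kd_waitok);
	km_free(v, PAGE_SIZE, &kv_any, &kp_zero);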
looked at by lots of people, deraadt@ and beck@ are yelling at me to commit.
|
|
Allow reclaiming pages from all pools.
Allow zeroing all pages.
Allocate the more equal pig.
mlarkin@ needs this.
Not called yet.
ok mlarkin@, theo@
|
|
Pointed out by Fred Crowson. ok ariane@
|
|
ok henning@
|
|
With this change bufcachepercent will be the percentage of dma reachable
memory that the buffer cache will attempt to use.
ok deraadt@ thib@ oga@
|
|
Bob needs this.
ok art@ bob@ thib@
|
|
- Fix error handling so that we free stuff on error.
- We use the mappings to keep track of which pages need to be
freed so don't unmap before freeing (this is theoretically
incorrect and will be fixed soon).
This makes fsck happy on bigmem machines (it doesn't leak all
dma-able memory anymore).
beck@, drahn@, oga@ ok
|
|
explicit_bzero() where required
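For instance (key is an illustrative buffer):

	explicit_bzero(key, sizeof(key));	/* unlike bzero(), not
						 * elided as a dead store */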
ok markus mikeb
|
|
a physical address [more precisely, something suitable to pass to
pmap_enter()'s physical address argument].
This allows MI drivers to implement mmap() routines without having to know
about the pmap_phys_address() implementation and #ifdef obfuscation.
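A sketch of what an MI driver mmap routine can now look like (the foo_*
names are hypothetical):

	paddr_t
	foo_mmap(dev_t dev, off_t off, int prot)
	{
		if (off < 0 || off >= FOO_REG_SIZE)
			return (-1);
		return (foo_base_pa + off);	/* fed to pmap_enter() */
	}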
|
|
resort if mmap fails otherwise to enable more complete address space
utilization. tested for a while with no ill effects.
|
|
unrelated, and his alpha is much happier now.
OK deraadt@
|
|
heap gap from max data size. nothing else changes yet. ok deraadt
|
|
vaddr_t PMAP_PREFER(..., vaddr_t). This allows better compiler optimization
when the function is inlined, and avoids accessing memory on architectures
where we can pass function arguments in registers.
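A sketch of the calling convention change (argument names assumed):

	/* before: function-like macro modifying its argument */
	PMAP_PREFER(offset, &va);
	/* after: pure function returning the preferred address */
	va = PMAP_PREFER(offset, va);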
|