"Buffer cache pages are wired but not counted as such. Therefore we
have to set the wire count on the pages to 0 before we call
uvm_pagefree() on them, just like we do in buf_free_pages().
Otherwise the wired pages counter goes negative. While there, also
sprinkle in the same KASSERTs that buf_free_pages() has."
ok beck@ (again)
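
A minimal sketch of the pattern being described, assuming the usual UVM
page-list idiom (the list `pgl' and the surrounding code are illustrative,
not the actual buffer cache code):

    struct vm_page *pg;

    while ((pg = TAILQ_FIRST(&pgl)) != NULL) {
            TAILQ_REMOVE(&pgl, pg, pageq);
            /* wired for the buffer cache, but never counted */
            pg->wire_count = 0;     /* keeps uvmexp.wired from underflowing */
            uvm_pagefree(pg);
    }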
initial thread
ok jsing@ kettenis@
vm_page structs go into three trees: uvm_objtree, uvm_pmr_addr, and
uvm_pmr_size. All of these have been moved to RBT code.
This should give us a decent chunk of code space back.
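
The RBT idiom, sketched (names are illustrative): unlike the older RB_*
macros, which generate a full set of tree functions per tree type, RBT
backs every tree with one shared generic implementation, so each tree only
carries a small type description; that is where the code space comes back.

    #include <sys/tree.h>

    struct vm_page;
    RBT_HEAD(uvm_objtree, vm_page);

    struct vm_page {
            RBT_ENTRY(vm_page) objt;        /* tree linkage */
            /* ... */
    };

    int uvm_pagecmp(const struct vm_page *, const struct vm_page *);
    RBT_PROTOTYPE(uvm_objtree, vm_page, objt, uvm_pagecmp);
    RBT_GENERATE(uvm_objtree, vm_page, objt, uvm_pagecmp);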
uvm_page_init() (causing uvmexp.npages to be slightly wrong if
pmap_steal_memory() has been used) and uvm_page_physload().
ok guenther@ kettenis@ visa@ beck@
mtx_enter() and mtx_leave() operations. Not 100% sure this won't blow up,
but there is only one way to find out, and we need this to make progress on
further unlocking uvm.
prodded by deraadt@
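
The locking idiom in question, minimally (the lock name and the protected
state are illustrative):

    #include <sys/mutex.h>

    struct mutex lock = MUTEX_INITIALIZER(IPL_VM);

    mtx_enter(&lock);
    /* ... touch the state this mutex now serializes ... */
    mtx_leave(&lock);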
consistency with PQ_AOBJ.
Input kettenis@, ok beck@
the page loaning code is already in the Attic.
ok kettenis@, beck@
a value so that they may be called with UVM_PLA_NOWAIT
ok kettenis@
the idle thread.
ok deraadt@
have any direct symbols used. Tested for indirect use by compiling
amd64/i386/sparc64 kernels.
ok tedu@ deraadt@
mappable memory (direct or via execve), perhaps because of the address
allocator behind maps and the way wiring counts work?
ok tedu@, guenther@, miod@
comment (one is fixed, one is deleted).
ok kettenis beck
PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC from mman.h.
PROT_MASK is introduced as the one true way of extracting those bits.
Remove UVM_ADV_* wrapper, using the standard names.
ok doug guenther kettenis
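
A sketch of the result, assuming PROT_MASK is simply the OR of the three
permission bits (`maxprot' is illustrative):

    /* from mman.h */
    #define PROT_NONE   0x00
    #define PROT_READ   0x01
    #define PROT_WRITE  0x02
    #define PROT_EXEC   0x04
    /* the one true way of extracting those bits */
    #define PROT_MASK   (PROT_READ | PROT_WRITE | PROT_EXEC)

    int prot = maxprot & PROT_MASK;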
on the 2nd of February 2011 in NetBSD.
http://marc.info/?l=netbsd-source-changes&m=129658899212732&w=2
http://marc.info/?l=netbsd-source-changes&m=129659095515558&w=2
http://marc.info/?l=netbsd-source-changes&m=129659157916514&w=2
http://marc.info/?l=netbsd-source-changes&m=129665962324372&w=2
http://marc.info/?l=netbsd-source-changes&m=129666033625342&w=2
http://marc.info/?l=netbsd-source-changes&m=129666052825545&w=2
http://marc.info/?l=netbsd-source-changes&m=129666922906480&w=2
http://marc.info/?l=netbsd-source-changes&m=129667725518082&w=2
emphatic ok usual suspects, grudging ok miod
will come back soon.
ok deraadt@
set the wire count on the pages to 0 before we call uvm_pagefree() on them,
just like we do in buf_free_pages(). Otherwise the wired pages counter goes
negative. While there, also sprinkle in the same KASSERTs that
buf_free_pages() has.
ok beck@
This change splits the buffer cache free lists into lists of dma reachable
buffers and high memory buffers based on the ranges returned by pmemrange.
Buffers move from dma to high memory as they age, but are flipped to dma
reachable memory if IO is needed to/from a high mem buffer. The total
amount of buffers allocated is now bufcachepercent of both the dma and
the high memory region.
This change allows the use of large buffer caches on amd64 machines with
more than 4 GB of memory.
ok tedu@ krw@ - testing by many.
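
An illustrative sketch of the flipping policy (both helper names are
hypothetical; dma_constraint is the real uvm range):

    /*
     * Buffers age from the DMA-reachable free lists into high memory;
     * before I/O, a high-memory buffer has its pages reallocated back
     * below the DMA limit.
     */
    if (!buf_dma_reachable(bp))         /* hypothetical check */
            buf_flip_to_dma(bp);        /* hypothetical: realloc pages
                                           within dma_constraint */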
machines where atomic ops aren't so simple.
ok beck deraadt miod
step 3 - re-merge 1.116 to 1.118
step 2 - re-merge 1.119 (the WAITOK diff)
step 1 - backout 1.116 to 1.119
This fix is actually by mikeb@; it needs thorough testing to verify
it doesn't bring up other issues that the old behaviour hid.
ok deraadt@
Spotted by oga@nicotinebsd.org, with help from dhill@. Fix by me.
ok miod@
of times. No functional change, but it makes the code a bit smaller.
ok mpi@
instead of two, building upon the knowledge of the state uvm_pagealloc_pg()
leaves the uvm_page in.
ok mpi@
It was removed as this function was redone to use pmemrange in mid 2010
with the result that kernel malloc and other users of this function can
consume the page daemon reserve and run us out of memory.
ok kettenis@
back it out.
More and more things are allocating outside of uvm_pagealloc these days, making
it easy for something like the buffer cache to eat your last page with no
repercussions (other than a hung machine, of course).
ok ariane@; also ok ariane@ again after I spotted and fixed a possible underflow
problem in the calculation.
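
A sketch of the shape such an underflow can take (hypothetical; the uvmexp
field names are real, `want' is illustrative):

    /*
     * With unsigned arithmetic,
     *         if (uvmexp.free - uvmexp.reserve_pagedaemon < want)
     * misbehaves whenever the reserve exceeds the free count; keep the
     * subtraction out of it instead:
     */
    if (uvmexp.free < uvmexp.reserve_pagedaemon + want)
            return (NULL);      /* leave the last pages to the pagedaemon */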
1) Make the pagedaemon aware of the memory ranges and size of allocations
where memory is being requested, and pass this information on to
bufbackoff(), which will later (not yet) be used to ensure that the
buffer cache gets out of the way in the right area of memory.
Note that this commit does not yet make it *do* that - as currently
the buffer cache is all in dma-able memory and it will simply back
off.
2) Add uvm_pagerealloc_multi - to be used by the buffer cache code
for reallocating pages to particular regions.
Much of this work by ariane, with smatterings of me, art, and oga.
ok oga@, thib@, ariane@, deraadt@
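
A hedged sketch of the two interfaces involved; the exact prototypes are
assumptions, not copied from the tree:

    struct uvm_constraint_range;

    /* ask the buffer cache to give back pages within a physical range */
    long    bufbackoff(struct uvm_constraint_range *, long);

    /* reallocate an object's pages so they land in a particular range */
    void    uvm_pagerealloc_multi(struct uvm_object *, voff_t, vsize_t,
                int, struct uvm_constraint_range *);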
The vm hackers don't use it, don't maintain it, and have to look at it all
the time. About time these 800 lines of code hit /dev/null.
``never liked it'' tedu@. ariane@ was very happy when I told her I wrote
this diff.
This is no functional change, since aobjs never actually hit this path
(also, it is my bug from a while ago).
ok ariane@
college uvm_pglist.c
uvm_pglistalloc and free are just thin wrappers around pmemrange these
days and don't really need their own file.
ok ariane@
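
The thin-wrapper claim as an illustrative shape (unit conversion and error
handling elided; the uvm_pmr_getpages() prototype is an assumption):

    int
    uvm_pglistalloc(psize_t size, paddr_t low, paddr_t high, paddr_t align,
        paddr_t boundary, struct pglist *rlist, int nsegs, int flags)
    {
            return (uvm_pmr_getpages(atop(round_page(size)), low, high,
                align, boundary, nsegs, flags, rlist));
    }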
The new world order of pmemrange makes this data completely redundant
(being dealt with by the pmemrange constraints instead). Remove all code
that messes with the freelist.
While touching every caller of uvm_page_physload() anyway, add the flags
argument to all callers (all but one pass 0, and that one already used
PHYSLOAD_DEVICE) and remove the macro magic to allow callers to continue
without it.
Should shrink the code a bit, as well.
matthew@ pointed out some mistakes I'd made.
``freelist death, I like. Ok.'' ariane@
``I agree with the general direction, go ahead and I'll fix any fallout
shortly'' miod@ (I could not check that 68k, 88k, and vax would build)
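
The post-change call shape, illustratively (uvm_page_physload() takes page
frame numbers, hence the atop()s; the variable names are made up):

    /* ordinary RAM */
    uvm_page_physload(atop(start), atop(end),
        atop(avail_start), atop(avail_end), 0);

    /* device memory that must never reach the free lists */
    uvm_page_physload(atop(start), atop(end),
        atop(start), atop(end), PHYSLOAD_DEVICE);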
it belongs to a world order that isn't here anymore. More importantly, it
has been unused for a fair while now.
ok thib@
At various times diffs have had debugging that checked that we don't
insert a page into the tree on top of an existing page, leaking that
page's references. Until the recent hackathon (and the introduction of
uvm_pagealloc_multi) the bufcache, for example, did an rb tree lookup on
insert to check (under #ifdef DEBUG || 1). Instead, just check it in
pageinsert every time: since RB_INSERT returns any duplicate, this
check is pretty much free.
``emphatically yes'' beck@
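
The free check, sketched (the tree and field names are illustrative):

    /*
     * RB_INSERT returns NULL on success and the colliding node on a
     * duplicate key, so detecting a double insert costs nothing extra.
     */
    struct vm_page *dupe;

    dupe = RB_INSERT(uvm_objtree, &uobj->memt, pg);
    KASSERT(dupe == NULL);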
ok henning@
With this change bufcachepercent will be the percentage of dma reachable
memory that the buffer cache will attempt to use.
ok deraadt@ thib@ oga@
Bob needs this.
ok art@ bob@ thib@
Bogus chunks pointed out by matthew@ and miod@. No cookies for
marco@ and jasper@.
ok deraadt@ miod@ matthew@ jasper@ marco@
which contains the constraints for DMA/memory allocation for each
architecture, and dma_constraints which contains the range of addresses
that are dma accessible by the system.
This is based on ariane@'s physcontig diff, with lots of bugfixes and
the following additions by myself:
Introduce a new function, pool_set_constraints(), which sets the address
range from which we allocate pages for the pool; this is now used
for the mbuf/mbuf cluster pools to keep them dma accessible.
The !direct archs no longer stuff pages into the kernel object in
uvm_km_getpage_pla but rather do a pmap_extract() in uvm_km_putpages.
Tested heavily by myself on i386, amd64 and sparc64. Some tests on
alpha and SGI.
"commit it" beck, art, oga, deraadt
"i like the diff" deraadt
recommit pmemrange:
physmem allocator: change the view of free memory from single
free pages to free ranges. Classify memory based on region with
associated use-counter (which is used to construct a priority
list of where to allocate memory).
Based on code from tedu@, help from many.
Usable now that bugs have been found and fixed in most architectures'
pmap.c
ok by everyone who has done a pmap or uvm commit in the last year.
sysctl.h was reliant on this particular include, and many drivers included
sysctl.h unnecessarily. Remove sysctl.h or add proc.h as needed.
ok deraadt
ok kettenis@ beck@ (tentatively) and ariane@. deraadt asked for it to be
committed now.
original commit message:
extend uvm_page_physload to have the ability to add "device" pages to
the system.
This is needed in the case where you need managed pages so you can
handle faulting and pmap_page_protect() on said pages when you manage
memory in such regions (I'm looking at you, graphics cards).
These pages are flagged PG_DEV and shall never be on the freelists;
assert this. Behaviour remains unchanged in the non-device case;
specifically, for all archs currently in the tree, we panic if called
after bootstrap.
ok art@ kettenis@, beck@
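
The assertion described above, illustratively placed in the allocator's
free path:

    /* PG_DEV pages must never reach the free lists */
    KASSERT((pg->pg_flags & PG_DEV) == 0);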