in the buffer lists by removing a buffer from the hash twice. Problem
identified in discussion with Alexander Bluhm <Alexander_Bluhm@genua.de>.
|
no point in placing the buffer in the vnode's clean list just to remove
it afterwards. Talked over with art@, various testing for a while.
|
no change in compiler assembly output.
|
ok millert@ tedu@
|
encapsulating all such access into well-defined functions
that make sure locking is done as needed.
It also cleans up some uses of wall time vs. uptime in some
places, but there are sure to be more of these needed as
well, particularly in MD code. Also, many current calls
to microtime() should probably be changed to getmicrotime(),
or to the {,get}microuptime() versions.
ok art@ deraadt@ aaron@ matthieu@ beck@ sturm@ millert@ others
"Oh, that is not your problem!" from miod@
|
rescinded 22 July 1999. Proofed by myself and Theo.
|
blinded you to the fact you were breaking ALL of our install media!
|
Mark biodone with splassert(IPL_BIO).
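A sketch of what that marking looks like; the function body here is elided,
only the splassert(9) call itself is the point:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/buf.h>
    #include <machine/intr.h>

    void
    biodone(struct buf *bp)
    {
        /*
         * Assert (when splassert checking is enabled) that the caller
         * really is at splbio(); otherwise this is just documentation.
         */
        splassert(IPL_BIO);

        /* ... rest of biodone() unchanged ... */
    }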
|
"should be called at splbio()"
|
It calls vwakeup and vwakeup is marked as "must be at splbio".
|
well (not at all) with shortages of the vm_map where the pages are mapped
(usually kmem_map).
Try to deal with it:
- group all information about the backend allocator for a pool into a
  separate struct. The pool only keeps a pointer to that struct.
- change the pool_init API to reflect that (see the sketch below).
- link all pools allocating from the same allocator on a linked list.
- since an allocator is responsible for waiting for physical memory, it will
  only fail (waitok) when it runs out of its backing vm_map; carefully
  drain pools using the same allocator so that va space is freed
  (see comments in code for caveats and details).
- change pool_reclaim to return whether it actually succeeded in freeing some
  memory, and use that information to make draining easier and more efficient.
- get rid of PR_URGENT, no one uses it.
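A rough sketch of how a pool might use the new arrangement, assuming the
pool_init() signature after this change (pool, item size, alignment, ioffset,
flags, wait channel, allocator) and a pool_allocator with pa_alloc/pa_free/
pa_pagesz up front; the allocator, pool and item names are made up:

    #include <sys/param.h>
    #include <sys/pool.h>

    struct example_item {
        int value;
    };

    /* Hypothetical backend functions returning/releasing page-sized chunks. */
    void *example_page_alloc(struct pool *, int);
    void  example_page_free(struct pool *, void *);

    /*
     * All backend state now lives in one struct; pools only keep a
     * pointer to it, and pools sharing it are linked so they can be
     * drained together when the backing vm_map runs out of space.
     */
    struct pool_allocator example_allocator = {
        example_page_alloc,     /* pa_alloc */
        example_page_free,      /* pa_free */
        0,                      /* pa_pagesz (0 = default page size) */
    };

    struct pool example_pool;

    void
    example_pool_setup(void)
    {
        pool_init(&example_pool, sizeof(struct example_item), 0, 0, 0,
            "examplpl", &example_allocator);
    }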
|
machines, or some configurations, or in some phase of the moon (we actually
don't know when or why), files disappeared. Since we've not been able to
track down the problem in two weeks of intense debugging and we need -current
to be stable, back out everything to the state it had before UBC.
We apologise for the inconvenience.
|
code is written mostly by Chuck Silvers <chuq@chuq.com>/<chs@netbsd.org>.
Tested for the past few weeks by many developers, should be in a pretty stable
state, but will require optimizations and additional cleanups.
|
While in the area, convert nfs node allocation from malloc to pool and do
some cleanups.
Based on the UBC changes in NetBSD. niklas@ ok.
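The malloc-to-pool part follows the usual conversion pattern; a sketch, with
the pool name and helper functions made up (the pool is assumed to be set up
once at NFS init time):

    #include <sys/param.h>
    #include <sys/pool.h>
    #include <nfs/nfsnode.h>

    struct pool nfs_node_pool;      /* assumed pool_init()ed at NFS setup */

    /* Previously: malloc(sizeof(struct nfsnode), ..., M_WAITOK) / free(). */
    struct nfsnode *
    example_nfsnode_alloc(void)
    {
        return (pool_get(&nfs_node_pool, PR_WAITOK));
    }

    void
    example_nfsnode_free(struct nfsnode *np)
    {
        pool_put(&nfs_node_pool, np);
    }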
|
(Look ma, I might have broken the tree)
|
Add a new DEBUG function "buf_print" that prints the contents of struct buf.
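A rough idea of what such a helper looks like; the exact fields and formats
printed below are a guess, not the committed buf_print():

    #ifdef DEBUG
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/buf.h>

    void
    buf_print(struct buf *bp)
    {
        printf("buffer %p\n", bp);
        printf("  vp %p lblkno 0x%llx blkno 0x%llx\n", bp->b_vp,
            (long long)bp->b_lblkno, (long long)bp->b_blkno);
        printf("  bufsize %ld bcount %ld resid %ld\n",
            (long)bp->b_bufsize, (long)bp->b_bcount, (long)bp->b_resid);
        printf("  flags 0x%lx error %d\n", (long)bp->b_flags, bp->b_error);
    }
    #endif /* DEBUG */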
|
we are below the low watermark or if we should try to use up all buffers.
|
occurs on filesystems with a large block size. We can have a situation where
numcleanbufs < locleanbufs and numdirtybufs < hidirtybufs. So, the buffer
flushing daemon never wakes up and other processes sleep forever waiting
for a clean buffer. We count pages only for the dirty buffers which are
on the free list (BQ_DIRTY).
niklas@ found this.
Rename flasher to cleaner. Suggested by costa@.
Discussed with niklas@, costa@, millert@, art@.
Ok deraadt@.
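To make the stuck state easier to see, a schematic fragment; the counters are
the ones named above (types assumed), while the wakeup channel and the
function are invented for the illustration:

    #include <sys/param.h>
    #include <sys/systm.h>

    /* Counters as named in the message above; types assumed. */
    extern long numcleanbufs, locleanbufs;
    extern long numdirtybufs, hidirtybufs;
    extern int bd_req;              /* hypothetical wakeup channel */

    void
    example_maybe_wake_cleaner(void)
    {
        /*
         * With a large block size each buffer covers many pages, so the
         * cache can be short of clean buffers (numcleanbufs < locleanbufs)
         * while the dirty count is still under its threshold
         * (numdirtybufs < hidirtybufs).  A cleaner woken only on the
         * dirty-side test below never runs in that state, and processes
         * waiting for a clean buffer sleep forever.
         */
        if (numdirtybufs >= hidirtybufs)
            wakeup(&bd_req);
    }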
|
problem when the syncer can't do its work because of vnode locks (PR1983).
This also solves our problem where a bigger number of buffers results in
much worse performance. In my configuration (i386, 128mb, BUFCACHEPERCENT=35)
this speeds up tar -xzf ports.tar.gz by 2-4 times. In configurations with a
low number of buffers and softupdates this may slow down some operations by
up to 15%.
The major difference from the current buffer cache is that the new
implementation uses separate queues for dirty and clean buffers, i.e. BQ_LRU
and BQ_AGE are replaced by BQ_CLEAN and BQ_DIRTY. This simplifies things a
lot and doesn't affect performance in a bad manner.
Thanks to art and costa for pointing out errors.
Tested by brad, millert, naddy, art, jj, camield
art, millert ok.
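A schematic of the clean/dirty split; BQ_CLEAN and BQ_DIRTY are from the
message above, the queue numbering and the helper are illustrative:

    #include <sys/param.h>
    #include <sys/queue.h>
    #include <sys/buf.h>

    #define BQ_CLEAN    0   /* clean buffers, can be recycled at once */
    #define BQ_DIRTY    1   /* dirty buffers, must be written back first */
    #define BQUEUES     2

    TAILQ_HEAD(bqueues, buf) bufqueues[BQUEUES];

    /*
     * A released buffer goes onto the dirty queue if it has delayed
     * writes pending, otherwise onto the clean queue.
     */
    void
    example_requeue(struct buf *bp)
    {
        if (bp->b_flags & B_DELWRI)
            TAILQ_INSERT_TAIL(&bufqueues[BQ_DIRTY], bp, b_freelist);
        else
            TAILQ_INSERT_TAIL(&bufqueues[BQ_CLEAN], bp, b_freelist);
    }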
|
a new buffer and indicate if it slept while getting that buffer.
This doesn't make much sense on its own, but further modifications will use it.
Work by art@
|
CLSIZE -> 1
CLBYTES -> PAGE_SIZE
CLOFSET -> PAGE_MASK
etc.
Some archs also needed some cleaning in vmparam.h, so that
goes in at the same time.
|
recycling B_AGE buffers with dependencies.
From NetBSD. costa@ ok.
|
Just wake up one process (there is a possible bug here that will be fixed
in the next round of cleanup).
Some misc cleanup, especially in the comments.
|
and getnewbuf. One process can sleep at "getnewbuf" waiting for a free
buffer while it holds buffer 'A' busy. Other processes can return buffers
to the free lists, but they sleep at "getblk" waiting for buffer 'A'.
art@ ok.
|
From gluk.
|
the whole buffer allocation process
|
Change VM/UVM to use buf_replacevnode to change the vnode associated
with a buffer.
Add v_bioflag for flags written in interrupt handlers
(and read at splbio, though not strictly necessary).
Add vwaitforio and use it instead of a while loop over v_numoutput.
Fix race conditions when manipulating the vnode free list.
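A sketch of the v_numoutput change; vwaitforio() is the helper added here,
while its exact argument list and the old-style loop shown in the comment are
reconstructed from memory and should be treated as approximate:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/vnode.h>

    /*
     * Old pattern, open-coded at each call site (at splbio):
     *
     *      while (vp->v_numoutput) {
     *              vp->v_bioflag |= VBIOWAIT;
     *              tsleep(&vp->v_numoutput, PRIBIO + 1, "wait", 0);
     *      }
     *
     * New pattern: one call hides the flag, the loop and the sleep.
     */
    void
    example_wait_for_writes(struct vnode *vp)
    {
        vwaitforio(vp, 0, "example", 0);
    }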
|
manually twiddling it. This allows the buffer cache to more easily
keep track of dirty buffers and decide when it is appropriate to speed
up the syncer.
Inspired by FreeBSD.
Looked over by art@
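The start of this message is cut off above, so the exact names are not
visible here; as an illustration of the pattern it describes, assuming the
usual buf_dirty()/B_DELWRI pairing (treat both names as assumptions):

    #include <sys/param.h>
    #include <sys/buf.h>

    void
    example_mark_dirty(struct buf *bp)
    {
        /*
         * Old: callers flipped the flag by hand, e.g.
         *      bp->b_flags |= B_DELWRI;
         * New: go through one function so the buffer cache can keep its
         * dirty-buffer accounting (and syncer speedup decision) in sync.
         */
        buf_dirty(bp);
    }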
|