Age | Commit message | Author |
|
over the past week. as a bonus, kills 5 XXXs.
|
|
one case fixed here).
miod@ "appears to be harmless"
markus@ ok
|
|
everyone for the prompt review and ok of this work ;-) Yeah, that includes me
too, or maybe especially me. I am sorry.
Change the sched_lock to a mutex. This fixes, among other things, the infamous
"telnet localhost &" problem. The real bug in that case was that the sched_lock,
which is by design a non-recursive lock, was recursively acquired, and not
enough releases made us hold the lock in the idle loop, blocking scheduling
on the other processors. Some of the other processors would hold the biglock,
though, which made it impossible for cpu 0 to enter the kernel... A nice deadlock.
Let me just say that debugging this for days just to realize that it was all fixed
in an old diff no one ever ok'd was somewhat of an anti-climax.
This diff also changes splsched to be correct for all our architectures.
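For readers unfamiliar with the failure mode, a minimal, invented C sketch
of what "recursively acquiring a non-recursive lock" means; names and
structure are illustrative only, not the actual sched_lock code:

/*
 * Illustration only, not kernel code: a non-recursive lock acquired
 * twice by the same cpu.  A real spinlock would spin forever on the
 * second acquire; this toy just detects and reports it.
 */
#include <assert.h>
#include <stdio.h>

#define CPU_NONE	(-1)

struct toy_lock {
	int tl_owner;			/* CPU_NONE when free */
};

static void
toy_lock_enter(struct toy_lock *l, int cpu)
{
	if (l->tl_owner == cpu) {
		/* A non-recursive lock has no owner check; it would
		 * simply spin here, waiting for itself.  Deadlock. */
		printf("cpu%d: recursive acquire, would spin forever\n", cpu);
		return;
	}
	assert(l->tl_owner == CPU_NONE);
	l->tl_owner = cpu;
}

static void
toy_lock_leave(struct toy_lock *l)
{
	l->tl_owner = CPU_NONE;
}

int
main(void)
{
	struct toy_lock sched = { CPU_NONE };

	toy_lock_enter(&sched, 0);	/* legitimate acquire */
	toy_lock_enter(&sched, 0);	/* recursive acquire: the bug */
	toy_lock_leave(&sched);		/* one leave can't pay for two enters */
	return 0;
}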
|
|
pages a process uses. this is now the userland "data size" value.
ok art deraadt tdeval. thanks testers.
|
|
who have machines that hit swap a lot. decided after a survey of developers
found that most had turned this on. ok various
|
|
other archs for now, beck theo ok
|
|
everybody please update your trees and test this, we need to find out
whether there are bad side-effects from the doubling. If this does not get
enough testing by our user community we will play safe and revert this for
the 3.7 release, so please test.
|
|
case. Do not arbitrarily disallow sizes with the high bit set; they
are unsigned. With lotsa help from miod@, tested by danh@
ok miod@ millert@ tedu@
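A generic sketch of the signedness pitfall being fixed; the code is
invented for illustration, not the code this commit touched:

/*
 * size_t is unsigned, so values with the high bit set are valid and
 * must not be rejected by a signed comparison.
 */
#include <stddef.h>

int
size_ok(size_t len, size_t limit)
{
	/*
	 * Wrong: a signed cast rejects every len with the high bit
	 * set, regardless of the real limit:
	 *
	 *	if ((long)len < 0)
	 *		return 0;
	 *
	 * Right: sizes are unsigned; compare them as such.
	 */
	return (len <= limit);
}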
|
|
scenarios, instead generating an ENOMEM backfeed, ok tedu@, prodded by many
|
|
no change in compiler assembly output.
|
|
ok deraadt
|
|
what i intended all along, without the contrived arithmetic screw-up.
from discussions with mickey and deraadt
|
|
on all other architectures. remove the last architecture-dependent #ifdef
from uvm code.
|
|
Remove inline from a few functions, shrink the kernel by a few kB, and
make things faster. A simple compilation on amd64 spends around 5%
less time in the kernel.
Yes, it's faster without inlines, now go buy a book about modern cpu
architectures and find a chapter about the new and revolutionary thing
called "cache".
deraadt@ ok
|
|
things such that code that only needs a second-resolution uptime or wall
time, and used to get that from time.tv_secs or mono_time.tv_secs, now gets
it from the separate time_t globals time_second and time_uptime.
ok art@ niklas@ nordin@
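A sketch of the intended usage pattern; the consumer function is invented,
but time_second and time_uptime are the globals named above:

#include <sys/types.h>

extern time_t time_second;	/* wall clock, whole seconds */
extern time_t time_uptime;	/* seconds since boot */

/* Hypothetical consumer that only needs second resolution. */
static int
example_is_stale(time_t stamped_at, time_t max_age)
{
	/* Instead of reading the full struct timeval, a single
	 * time_t load is enough here. */
	return (time_second - stamped_at > max_age);
}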
|
|
break out the uvm_km_page bits for this case; no thread here.
lots of testing tech@, deraadt@, naddy@, mickey@, ...
|
|
prevents msync/madvise funniness
from art@ ok deraadt@
|
|
change both the nointr and default pool allocators to use uvm_km_getpage.
change pools to default to a maxpages value of 8, so they hoard less memory.
change mbuf pools to use the default pool allocator.
pools are now more efficient, use less of the kmem_map, and are a bit faster.
tested by mcbride, deraadt, pedro, drahn, miod to work everywhere
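For context, a hedged sketch of a pool(9) consumer, assuming the
seven-argument pool_init() of this era; "foo" is invented. Passing a NULL
allocator selects the default one, which after this change gets its pages
via uvm_km_getpage and keeps at most 8 idle pages (the new maxpages
default):

#include <sys/param.h>
#include <sys/pool.h>

struct foo {
	int f_val;
};

struct pool foo_pool;

void
foo_init(void)
{
	/* NULL allocator: use the default pool allocator. */
	pool_init(&foo_pool, sizeof(struct foo), 0, 0, 0, "foopl", NULL);
}

struct foo *
foo_alloc(void)
{
	return (pool_get(&foo_pool, PR_WAITOK));
}

void
foo_free(struct foo *f)
{
	pool_put(&foo_pool, f);
}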
|
|
with a static page size on platforms where it may vary.
ok deraadt@ millert@ tdeval@
|
|
solves problems encountered by david@ and dtucker@ (pr3758)
|
|
tested by jmc, brad, hshoexer
|
|
an interrupt-safe thread.
use this as the new backend for mbpool and mclpool, eliminating the mb_map.
introduce a sysctl, kern.maxclusters, which controls the limit on clusters
allocated.
testing by many people; works everywhere but m68k. ok deraadt@
this essentially deprecates the NMBCLUSTERS option; don't use it.
this should reduce pressure on the kmem_map and the uvm reserve of static
map entries.
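A userland sketch of reading the new limit, assuming the usual sysctl(3)
pattern and that the mib name for kern.maxclusters is KERN_MAXCLUSTERS:

#include <sys/param.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	/* KERN_MAXCLUSTERS is assumed to be the mib for kern.maxclusters. */
	int mib[2] = { CTL_KERN, KERN_MAXCLUSTERS };
	int maxclusters;
	size_t len = sizeof(maxclusters);

	if (sysctl(mib, 2, &maxclusters, &len, NULL, 0) == -1) {
		perror("sysctl");
		return 1;
	}
	printf("kern.maxclusters = %d\n", maxclusters);
	return 0;
}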
|
|
all architectures but arm, where it is needed.
|
|
- rijndael_set_key_enc_only() sets up context for encryption only
- rijndael_set_key() always sets up full context
- rijndaelKeySetupDec() gets back its original prototype
- uvm: use _enc_only() interface
with hshoexer@, ok deraadt@
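A sketch of how the split interface divides labor; the exact signatures are
assumed from the names above (key length given in bits):

#include <sys/types.h>
#include <crypto/rijndael.h>

void
example_encrypt_block(const u_char *key, const u_char *in, u_char *out)
{
	rijndael_ctx ctx;

	/* Encryption-only schedule: cheaper when no decryption is
	 * needed, as in the uvm use mentioned above. */
	rijndael_set_key_enc_only(&ctx, key, 128);
	rijndael_encrypt(&ctx, in, out);
}

void
example_decrypt_block(const u_char *key, const u_char *in, u_char *out)
{
	rijndael_ctx ctx;

	/* Full schedule, with both encrypt and decrypt round keys. */
	rijndael_set_key(&ctx, key, 128);
	rijndael_decrypt(&ctx, in, out);
}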
|
|
<zejames@greyhats.org>; tedu@ ok
|
|
other callers do
|
|
ok millert@
|
|
ok millert@ henning@ markus@ drahn@
|
|
we're looking for. change the small page_header hash table to a splay tree.
from Chuck Silvers.
tested by brad grange henning mcbride naddy otto
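To illustrate why a splay tree suits repeated lookups: each SPLAY_FIND()
rotates the found node to the root, so lookups with locality stay cheap. A
sketch using the <sys/tree.h> macros, with "pghdr" as an invented stand-in
for the real page_header bookkeeping:

#include <sys/types.h>
#include <sys/tree.h>

struct pghdr {
	SPLAY_ENTRY(pghdr) ph_node;
	vaddr_t ph_page;		/* lookup key */
};

static int
pghdr_cmp(struct pghdr *a, struct pghdr *b)
{
	return (a->ph_page < b->ph_page ? -1 : a->ph_page > b->ph_page);
}

SPLAY_HEAD(pghdr_tree, pghdr);
SPLAY_PROTOTYPE(pghdr_tree, pghdr, ph_node, pghdr_cmp);
SPLAY_GENERATE(pghdr_tree, pghdr, ph_node, pghdr_cmp);

/*
 * SPLAY_FIND() splays the node it finds to the root, so a run of
 * lookups for the same page finds it immediately after the first hit.
 */
struct pghdr *
pghdr_lookup(struct pghdr_tree *t, vaddr_t page)
{
	struct pghdr key;

	key.ph_page = page;
	return (SPLAY_FIND(pghdr_tree, t, &key));
}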
|
|
Tested by mickey@, henning@, ericj@, and beck@.
ok mickey@
|
|
to prevent fragmentation.
this has the effect of randomizing unhinted mmap()s, sysV mem, and the
position of ld.so.
tested on many archs by many developers for quite some time.
use of MIN to allow m68k to play, from miod@.
vax is not included.
ok deraadt@ miod@
|
|
i386 exec mappings are still random. detected by pvalchev@. ok deraadt@
|
|
scattering ld.so and libraries around, although all mmaps will have
some jitter too. better version after some discussion with drahn.
testing/ok deraadt henning marcm otto pb
|
|
from Patrick Latifi <patrick.l@hermes.usherb.ca>
ok jason@ tedu@
|