author    | Artur Grabowski <art@cvs.openbsd.org> | 2008-09-23 13:25:47 +0000
committer | Artur Grabowski <art@cvs.openbsd.org> | 2008-09-23 13:25:47 +0000
commit    | 1af0a7bea6846dfa68af14a0710a60f8255af046 (patch)
tree      | 72cf6edcc97bc0c402c031822cdc11f998b1c517 /sys/uvm
parent    | 82142379b0686db75b95ee2849865cadba309949 (diff)
Do not merge userland map entries.
Imagine lots of random small mappings (think malloc(3)) and sometimes
one large mapping (network buffer). If we've filled up our address space
enough, the random address picked for the large allocation is likely to
be overlapping an existing small allocation, so we'll do a linear scan
to find the next free address. That next free address is likely to
be just after a small allocation. Those two map entries get merged.
If we now allocate an amap for the merged map entry, it will be large.
When we later free the large allocation the amap is not truncated. All
these are design decisions that made sense for sbrk, but with random
allocations and malloc that actually returns memory, this really hurt us.
This is the reason why certain processes like apache and sendmail could
eat more than 10 times as much amap memory as they needed, eventually
hitting the malloc limit and hanging or running the machine out of
kmem_map and crashing.
otto@ ok
Diffstat (limited to 'sys/uvm')
-rw-r--r-- | sys/uvm/uvm_map.c | 13 |
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/sys/uvm/uvm_map.c b/sys/uvm/uvm_map.c
index e9f7e2ad21a..14a2b536039 100644
--- a/sys/uvm/uvm_map.c
+++ b/sys/uvm/uvm_map.c
@@ -1,4 +1,4 @@
-/*	$OpenBSD: uvm_map.c,v 1.103 2008/07/25 12:05:04 art Exp $	*/
+/*	$OpenBSD: uvm_map.c,v 1.104 2008/09/23 13:25:46 art Exp $	*/
 /*	$NetBSD: uvm_map.c,v 1.86 2000/11/27 08:40:03 chs Exp $	*/
 
 /*
@@ -98,6 +98,7 @@ static struct timeval uvm_kmapent_last_warn_time;
 static struct timeval uvm_kmapent_warn_rate = { 10, 0 };
 
 struct uvm_cnt uvm_map_call, map_backmerge, map_forwmerge;
+struct uvm_cnt map_nousermerge;
 struct uvm_cnt uvm_mlk_call, uvm_mlk_hint;
 const char vmmapbsy[] = "vmmapbsy";
 
@@ -538,6 +539,7 @@ uvm_map_init(void)
 	UVMCNT_INIT(map_backmerge, UVMCNT_CNT, 0, "# uvm_map() back merges", 0);
 	UVMCNT_INIT(map_forwmerge, UVMCNT_CNT, 0, "# uvm_map() missed forward", 0);
+	UVMCNT_INIT(map_nousermerge, UVMCNT_CNT, 0, "# back merges skipped", 0);
 	UVMCNT_INIT(uvm_mlk_call, UVMCNT_CNT, 0, "# map lookup calls", 0);
 	UVMCNT_INIT(uvm_mlk_hint, UVMCNT_CNT, 0, "# map lookup hint hits", 0);
 
@@ -834,6 +836,15 @@ uvm_map_p(struct vm_map *map, vaddr_t *startp, vsize_t size,
 		goto step3;
 	}
 
+	/*
+	 * Only merge kernel mappings, but keep track
+	 * of how much we skipped.
+	 */
+	if (map != kernel_map && map != kmem_map) {
+		UVMCNT_INCR(map_nousermerge);
+		goto step3;
+	}
+
 	if (prev_entry->aref.ar_amap) {
 		error = amap_extend(prev_entry, size);
 		if (error) {