Age | Commit message | Author |
|
from weerd
|
|
from lucas de sena
ok espie
|
|
__builtin_return_address(a) with a != 0.
|
|
ok deraadt@
|
|
exactly this use case where the new memory needs to be zeroed during resize.
Since recallocarray() takes care of all this there is no need to bzero()
memory anymore.
OK tb@ millert@
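The pattern this entry describes can be sketched as a minimal, portable stand-in (the name grow_zeroed and its signature are illustrative only; the real recallocarray(3) additionally scrubs the region released on shrink):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/*
 * Illustrative stand-in for the recallocarray(3) pattern: grow an
 * array from oldn to newn elements of size sz and zero the new tail,
 * so the caller no longer needs a separate bzero().
 */
static void *
grow_zeroed(void *p, size_t oldn, size_t newn, size_t sz)
{
	void *q;

	if (sz != 0 && newn > SIZE_MAX / sz)
		return NULL;			/* multiplication overflow */
	q = realloc(p, newn * sz);
	if (q != NULL && newn > oldn)
		memset((char *)q + oldn * sz, 0, (newn - oldn) * sz);
	return q;
}
```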
|
|
unfortunately gcc3 does not have __builtin_clz().
ok miod@ otto@
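A fallback of the usual shape (a sketch, not necessarily the committed code) counts leading zeros of a nonzero 32-bit value without the builtin:

```c
/*
 * Portable substitute for __builtin_clz() on compilers that lack it
 * (e.g. gcc3): binary-search the position of the highest set bit.
 * Input must be nonzero, matching the builtin's contract.
 */
static int
clz32(unsigned int x)
{
	int n = 0;

	if (x <= 0x0000ffff) { n += 16; x <<= 16; }
	if (x <= 0x00ffffff) { n += 8;  x <<= 8; }
	if (x <= 0x0fffffff) { n += 4;  x <<= 4; }
	if (x <= 0x3fffffff) { n += 2;  x <<= 2; }
	if (x <= 0x7fffffff) { n += 1; }
	return n;
}
```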
|
|
On free, chunks (the pieces of a page used for smaller allocations)
are junked and then validated after they leave the delayed free
list. So after free, a chunk always contains junk bytes. This means
that if we start with the right contents for a new page of chunks,
we can *validate* instead of *write* junk bytes when (re)-using a
chunk.
With this, we can detect write-after-free when a chunk is recycled,
not just when a chunk is in the delayed free list. We do a little
bit more work on initial allocation of a page of chunks and when
re-using (as we validate now even on junk level 1).
Also: some extra consistency checks for recallocarray(3) and fixes
in error messages to make them more consistent, with man page bits.
Plus regress additions.
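The validate-instead-of-write idea can be sketched as follows (the junk byte and helper names are illustrative, not malloc.c's internals): freed chunks are filled with a fixed pattern, and when a chunk is handed out again the pattern is checked, catching a write-after-free at recycle time.

```c
#include <stddef.h>
#include <string.h>

#define JUNK_BYTE 0xdf	/* illustrative junk pattern */

/* Fill a freed chunk with junk bytes. */
static void
junk_fill(unsigned char *p, size_t n)
{
	memset(p, JUNK_BYTE, n);
}

/* On reuse, verify the junk is intact; 0 means write-after-free. */
static int
junk_validate(const unsigned char *p, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (p[i] != JUNK_BYTE)
			return 0;
	return 1;
}
```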
|
|
Currently, pledged '-pg' binaries get killed in _mcleanup() when they
try to disable profil(2) via moncontrol(3).
Disabling profil(2) is harmless. Add profil(2) to the "stdio"
pledge(2) promise and permit profil(2) calls when the scale argument
is zero. Enabling profil(2) remains forbidden in pledged processes.
This gets us one step closer to making '-pg' binaries compatible with
pledge(2). The next step is to decide how to exfiltrate the profiling
data from the process during _mcleanup().
Prompted by semarie@. Cleaned up by deraadt@. With input from
deraadt@, espie@, and semarie@.
"Looks good" deraadt@
pledge(2) pieces ok semarie@
|
|
ok guenther@
|
|
non-trivial new information or code-paths over wait4(), include
it in pledge("stdio")
discussed with deraadt@
|
|
with an ENTRY(), so it needs its own endbr64 for IBT
ok deraadt@
|
|
future, inadvertent PLT entries. Move the __getcwd and __realpath
declarations to hidden/{stdlib,unistd}.h to consolidate and remove
duplication.
ok tb@ otto@ deraadt@
|
|
so that the internal call can't be interposed over by the app.
ok tb@ otto@ deraadt@
|
|
* with IBT, it can't return via an indirect jump as that would
require the *caller* to have an endbr64
* to support a potential vmspace-sharing implementation, keep the
retguard value in an arg register across the underlying syscall
ok kettenis@ deraadt@
|
|
ok jan bluhm
|
|
Improve markup while here.
Feedback tb jmc
OK millert
|
|
that the removal of the off_t padding, amd64 syscalls no longer passed a 7th
or later argument. We overlooked that syscall(2) bumps the arg count by one,
so six argument calls like SYS_sysctl still pass an argument on the stack.
So, repush the 7th argument so it's at the expected stack offset after the
retguard register is pushed.
problem reported and ok bluhm@
|
|
unlock-lock dance it serves no real purpose any more. Confirmed
by a small performance increase in tests. ok tb@
|
|
This simplifies syzkaller revival after the removal of __syscall.
OK bluhm, millert, deraadt
|
|
ok otto@
|
|
ok tb@
|
|
(sorry, otto, for not spotting in the updated diff)
|
|
except for bootblocks. This way we have built-in leak detection
always (if enabled by malloc flags). See man pages for details.
|
|
Should catch more of them and closer (in time) to the WAF. ok tb@
|
|
<machine/asm.h> they already get the necessary "bti c" instructions.
Pass the -mmark-bti-property option to mark the corresponding object
files as having BTI support.
ok deraadt@
|
|
The basic idea is simple: one of the reasons the recent sshd bug
is potentially exploitable is that an (erroneously) freed malloc
chunk gets re-used in a different role. malloc has power of two
chunk sizes and so one page of chunks holds many different types
of allocations. Userland malloc has no knowledge of types; we only
know about sizes. So I changed that to use finer-grained chunk
sizes.
This has some performance impact as we need to allocate chunk pages
in more cases. Gain it back by allocating chunk_info pages in a
bundle, and use fewer buckets if !malloc option S. The chunk sizes
used are 16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320,
384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792, 2048 (and a
few more for sparc64 with its 8k sized pages and loongson with its
16k pages).
If malloc option S (or rather cache size 0) is used we use strict
multiple-of-16 sized chunks, to get as many buckets as possible.
ssh(d) enabled malloc option S; in general, security-sensitive
programs should.
See the find_bucket() and bin_of() functions. Thanks to Tony Finch
for pointing me to code to compute nice bucket sizes.
ok tb@
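The bucket scheme above can be sketched in C (illustrative only, not the committed malloc.c code; bucket_size is a hypothetical helper mirroring what find_bucket()/bin_of() compute): sizes up to 128 round up to a multiple of 16, and above that each power-of-two range is split into four linear sub-bins, which reproduces the 16, 32, ..., 160, 192, 224, 256, 320, ..., 2048 series on 4k-page machines.

```c
/*
 * Round a request size up to its bucket size: multiples of 16 up to
 * 128, then four linear sub-bins per power-of-two range above that.
 */
static unsigned int
bucket_size(unsigned int sz)
{
	unsigned int step, top;

	if (sz <= 128)
		return sz <= 16 ? 16 : (sz + 15) & ~15u;
	/* smallest power of two >= sz, starting above 128 */
	for (top = 256; top < sz; top <<= 1)
		;
	step = top / 8;		/* four sub-bins per doubling */
	return (sz + step - 1) & ~(step - 1);
}
```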
|
|
Originally from djm@. OK deraadt@ florian@ bluhm@
|
|
Based on a patch from enh@google. OK tb@
|
|
and store it in a const variable for use by crt0.
help from kettenis and miod
|
|
freeing; ok tb@
|
|
future. The ports team is already running around with axes and mops,
but don't worry, such an action won't happen quickly.
with tb
|
|
ok guenther
|
|
an implementation detail for the kernel, libc, and libkvm,
and should not be a concern for others.
|
|
passed in a specific call.
From discussion with schwarze@ and jmc@
ok jmc@
|
|
From discussion with schwarze@ and jmc@
ok jmc@
|
|
wrong address to the kernel. disable for now.
|
|
text more generic
|
|
tell the kernel where the execve stub is found. With this mechanism
we cannot tell the size, so use 128 as an estimate for the most we expect
from any architecture.
discussed with kettenis, ok guenther
|
|
ok guenther
|
|