Age | Commit message | Author |
|
ssh tools. The dynamic objects are entirely ret-clean; static binaries
will contain a blend of cleaning and non-cleaning callers.
|
|
placed at the head of the btext (boot.text) segment. (The boot.text segment is
"unmapped" after initialization, as a self-protection mechanism.) This meant
the LOAD's virtual addresses were not in sequence, which clearly isn't
what we intended.
|
|
Required for strict-alignment architectures and a good idea on others.
same as kettenis commit to libc
|
|
with {uint offset, uint syscall#} entries in libc & ld.so.
In libc a few syscall# entries (break, sigprocmask, _tfork, _threxit)
are duplicated because additional or inline uses occur (that situation
is handled elsewhere)
ok kettenis
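A rough C picture of those entries (names are illustrative, not the actual declarations):
    struct pin_entry {
        unsigned int offset;    /* offset of the syscall instruction within text */
        unsigned int sysno;     /* syscall number used at that offset */
    };
    /* each binary carries an array of these, one per registered syscall site */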
|
|
reproducing the relevant defines and code in a different place) to perform
minor relocations. If things go very wrong, it would call _dl_exit() --
a locally defined crt0 function which is just the exit(2) syscall. We don't need
to call exit(2) for this obscure case which doesn't happen and provides no
debugging information. An 'abort' is going to provide better information.
So let's change the function name to _dso_abort() and make it a single
illegal instruction.
ok guenther
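A minimal C approximation, assuming a GCC-compatible compiler (the actual change uses a single hand-written illegal instruction per architecture):
    static void
    _dso_abort(void)
    {
        /* emits one trapping instruction (e.g. ud2 on amd64); the fault
           identifies the call site, unlike a silent exit(2) */
        __builtin_trap();
    }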
|
|
indirect branch, so include an endbr64 Just In Case.
ok deraadt@
|
|
|
|
|
|
OK deraadt
|
|
is no longer a NOP on those systems, so let's do it.
|
|
Since we got rid of padded syscalls we have enough registers to do this.
ok deraadt@ ok kettenis@
|
|
of ld.so boot.text region is now (silently) failing because the region is
contained within the text LOAD, which is immutable. So create a new btext
LOAD with flags PF_X|PF_R|PF_OPENBSD_MUTABLE, and place all boot.text objects
in there. This LOAD must also be page-aligned so it doesn't skip unmapping
some of the object region; previously it was hilariously unaligned.
Similar changes for other architectures coming after more testing.
ok kettenis and guenther seemed to like it also
|
|
relocations.
ok guenther@
|
|
so delete the #includes and hide the RELOC_* functions that are
only used by lib/csu behind "#ifdef RCRT0".
These are the ones I tested; kettenis@ was on board with the concept.
|
|
* replace #include "archdep.h" with #includes of what is used, pulling in
"syscall.h", "util.h", and "archdep.h" as needed
* delete #include <sys/syscall.h> from syscall.h
* only pull in <sys/stat.h> to the three files that use _dl_fstat(),
forward declare struct stat in syscall.h for the others (see sketch below)
* NBBY is for <sys/select.h> macros; just use '8' in dl_printf.c
* <machine/vmparam.h> is only needed on i386; conditionalize it
* stop using __LDPGSZ: use _MAX_PAGE_SHIFT (already used by malloc.c)
where necessary
* delete other bogus #includes, order legit per style: <sys/*> then
<*/*>, then <*>, then "*"
dir.c improvement from jsg@
ok and testing assistance deraadt@
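For example, the forward declaration mentioned above lets syscall.h declare the stub without pulling in <sys/stat.h> (a sketch, not the verbatim header):
    struct stat;                            /* forward declaration is enough here */
    int     _dl_fstat(int, struct stat *);  /* the three real users include
                                               <sys/stat.h> themselves */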
|
|
Switch libc and ld.so to the generic stubs for these calls.
WARNING: reboot to updated kernel before installing libc or ld.so!
Time for a story...
When gcc (back in 1.x days) first implemented long long, it didn't (always)
pass 64bit arguments in 'aligned' registers/stack slots, with the result that
argument offsets didn't match structure offsets. This affected the nine system
calls that pass off_t arguments:
ftruncate lseek mmap mquery pread preadv pwrite pwritev truncate
To avoid having to do custom ASM wrappers for those, BSD put an explicit pad
argument in so that the off_t argument would always start on an even slot and
thus be naturally aligned. Thus those odd wrappers in lib/libc/sys/ that use
__syscall() and pass an extra '0' argument.
The ABIs for different CPUs eventually settled how things should be passed on
each and gcc 2.x followed them. The only arch now where it helps is landisk,
which needs to skip the last argument register if it would be the first half of
a 64bit argument. So: add new syscalls without the pad argument and on landisk
do that skipping directly in the syscall handler in the kernel. Keep compat
support for the existing syscalls long enough for the transition.
ok deraadt@
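A simplified illustration of that padding (abbreviated prototypes, not the exact kernel declarations):
    /* old, padded form: the dummy slot keeps the 64-bit off_t in an even slot */
    off_t   lseek(int fd, int pad, off_t offset, int whence);
    /* libc's wrapper passed an explicit 0:
           return __syscall((quad_t)SYS_lseek, fd, 0, offset, whence);        */

    /* new, unpadded form added by this change */
    off_t   lseek(int fd, off_t offset, int whence);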
|
|
Annotate RELOC_DYN() on non-hppa as only used in lib/csu.
Delete some inconsistent comments, adjust whitespace, and reorder
mips64's archdep.h so that the ld.so/*/archdep.h files look
(almost) the same.
ok visa@ kettenis@
|
|
that deletes the lazy relocation trampoline which ltrace currently
depends on
problem reported by tb@
directional feedback kettenis@
ok mpi@
|
|
enforce a new policy: system calls must be in pre-registered regions.
We have discussed more strict checks than this, but none satisfy the
cost/benefit based upon our understanding of attack methods; anyways,
let's see what the next iteration looks like.
This is intended to harden (translation: attackers must put extra
effort into attacking) against a mixture of W^X failures and JIT bugs
which allow syscall misinterpretation, especially in environments with
polymorphic-instruction/variable-sized instructions. It fits in a bit
with libc/libcrypto/ld.so random relink on boot and no-restart-at-crash
behaviour, particularly for remote problems. Less effective once on-host,
since the libraries can be read.
For static executables the kernel registers the main program's
PIE-mapped exec section as valid, as well as the randomly-placed sigtramp
page. For dynamic executables ELF ld.so's exec segment is also
labelled valid; ld.so then has enough information to register libc's
exec section as valid via a call-once msyscall(2).
For dynamic binaries, we continue to permit the main program exec
segment because "go" (and potentially a few other applications) have
embedded system calls in the main program. Hopefully at least go gets
fixed soon.
We declare the concept of embedded syscalls a bad idea for numerous
reasons, as we notice the ecosystem has many
static-syscall-in-base-binary programs which are dynamically linked against
libraries which in turn use libc, which contains another set of
syscall stubs. We've been concerned about adding even one additional
syscall entry point... but go's approach tends to double the entry-point
attack surface.
This was started at a nano-hackathon in Bob Beck's basement 2 weeks
ago during a long discussion with mortimer trying to hide from the SSL
scream-conversations, and finished in more comfortable circumstances
next to a wood-stove at Elk Lakes cabin with UVM scream-conversations.
ok guenther kettenis mortimer, lots of feedback from others
conversations about go with jsing tb sthen
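Schematically, the call-once registration is a hedged sketch like this (variable names are illustrative; the prototype is the new msyscall(2)):
    /* int msyscall(void *addr, size_t len); */
    if (msyscall(libc_text_addr, libc_text_len) == -1)
        _dl_die("msyscall");    /* the kernel allows this registration only once */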
|
|
something's broken on at least i386.
|
|
that handle a dozen relocation types for each, just have a nice little switch
for the four specific relocations that actually occur.
Besides being smaller and easier to understand, this fixes the COPY
relocation handling to only do one symbol lookup, instead of looking
up the symbol and then immediately looking it up again (with the
correct flags to find the instance it needs).
ok kettenis@
|
|
relocation from _dl_md_reloc() to _dl_md_reloc_all_plt() which has
the minimal code to do it.
Also, avoid division on PLTRELSZ; just use it to offset to the end.
ok kettenis@
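Roughly, the end-pointer form described above (RELA flavour, Elf_RelA from <sys/exec_elf.h>; names are illustrative):
    Elf_RelA *rel = jmprel;
    Elf_RelA *end = (Elf_RelA *)((char *)jmprel + pltrelsz); /* offset, no division */

    for (; rel < end; rel++)
        ;   /* bind this JUMP_SLOT eagerly */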
|
|
Strip superfluous parens from return statements while here.
Done programmatically with two perl invocations
idea ok kettenis@ drahn@
ok visa@
|
|
'relative' relocation. Take advantage of that to simplify ld.so's self-reloc
code:
* give the exceptional archs (hppa and mips64) copies of the current boot.c
as boot_md.c
* teach the Makefile to use boot_md.c when present
* reduce boot.c down to the minimum necessary to handle just relative reloc
* teach the Makefile to fail if the built ld.so has other types of relocs
ok visa@ kettenis@
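The reduced boot.c then boils down to one loop like this sketch (RELA flavour; ELF_R_TYPE is the usual ELF macro, RELATIVE_RELOC stands for the per-arch relative relocation constant, and loff is ld.so's own load offset):
    for (; rela < end; rela++)
        if (ELF_R_TYPE(rela->r_info) == RELATIVE_RELOC)
            *(Elf_Addr *)(loff + rela->r_offset) = loff + rela->r_addend;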
|
|
ok mlarkin@, mpi@, krw@, deraadt@
|
|
hiding the actual grotty bits in inline functions
ok mpi@
|
|
- the symbol it found, returned via the second argument
- the base offset of the object it was found in, via the return value
- optionally: the object it was found in, returned via the last argument
Instead, return a struct with the symbol and object pointers and let the
caller get the base offset from the object's obj_base member. On at least
aarch64, amd64, mips64, powerpc, and sparc64, a two word struct like this
is passed in registers.
ok mpi@, kettenis@
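A sketch of that two-word result (member names are illustrative; the real declarations live in ld.so's resolver):
    struct sym_res {
        const Elf_Sym           *sym;   /* symbol that was found */
        const struct elf_object *obj;   /* object it was found in */
    };
    /* callers recover the old base-offset value as res.obj->obj_base */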
|
|
ok kettenis@
|
|
ok deraadt@, kettenis@
|
|
the change in __getcwd(2)'s return value. Fix it by switching to the
__realpath(2) syscall, eliminating the ld.so copy of realpath().
problem caught by regress and noted by bluhm@
ok deraadt@
|
|
- put functions and data which are only used before calling the executable's
start function into their own page-aligned segments for unmapping
(only done on amd64, arm64, armv7, powerpc, and sparc64 so far)
- pass .init_array and .preinit_array functions an additional argument which
is a callback to get a structure which includes a function that frees
the boot text and data (see the sketch below)
- sometimes delay doing RELRO processing: for a shared-object marked
DF_1_INITFIRST do it after the object's .init_array, for the executable
do it after the .preinit_array
- improve test-ld.so to link against libpthread and trigger its initialization
late
libc changes to use this will come later
ok kettenis@
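Loosely, the extra argument looks something like this (every name here is illustrative, not the actual ABI):
    struct boot_hooks {
        void    (*dl_clean_boot)(void);     /* unmaps ld.so's boot text and data */
        /* ... other late-initialization hooks ... */
    };
    typedef const struct boot_hooks *(*hooks_cb)(int version);

    /* .init_array/.preinit_array functions now also receive a hooks_cb */
    void    some_init(int argc, char **argv, char **envp, hooks_cb cb);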
|
|
__got_{start,end} to find a region to mark read-only. It was only used
for binaries that didn't have a GNU_RELRO segment, but all archs have
been using that for over a year. Since support for insecure-PLT layouts
on powerpc and alpha has been removed, all archs handle GNU_RELRO the
same way and the support can be moved from the MD code to the MI code.
ok mpi@
|
|
marking them const will keep a source change from silently moving them
back to .data
ok deraadt@ kettenis@
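For example (generic C, not a particular ld.so table): without the qualifier this array lands in .data and stays writable, with it the compiler puts it in .rodata:
    static const char *const reloc_names[] = { "NONE", "RELATIVE", "COPY" };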
|
|
ok kettenis guenther
|
|
stub doesn't preserve them and some may be used for passing arguments
ok kettenis@ deraadt@ mlarkin@
|
|
ok jasper@, jca@, deraadt@
|
|
|
|
which is largely MI.
ok visa kettenis
|
|
(generally associated with hardwired BTC limitations). And then fill
those alignments with 0xcc (int 3) to match our trapsled model. Resulting
binaries show no sequential nops.
ok mlarkin kettenis mortimer
|
|
simply exiting, via helper functions _dl_die(), _dl_diedie(), and
_dl_oom().
prompted by a complaint from jsing@
ok jsing@ deraadt@
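Their shapes, roughly (sketched prototypes; bodies omitted):
    __dead void _dl_die(const char *fmt, ...); /* print a message, then _dl_diedie() */
    __dead void _dl_diedie(void);              /* terminate the process */
    __dead void _dl_oom(void);                 /* die with an out-of-memory message */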
|
|
|
|
Move _dl_mmap() and _dl_mquery() inlines from archdep.h to syscall.h and
remove pointless casts and unnecessary parens.
ok kettenis@
|
|
ok deraadt@
|
|
instead. Results in a few more pages that aren't writable on some platforms
(such as hppa). Based on an initial diff from guenther@.
Thanks to deraadt@ for testing.
ok guenther@
|
|
ok deraadt@
|
|
for our development process.
ok kettenis@ deraadt@
|
|
possible EXEC permission for the section, because the proper permission
is set late, and there are no thread concerns here. Avoids W^X issues
in oddball cases.
ok guenther kettenis
|
|
ok guenther
|
|
This stores errno, the cancelation flags, and related bits for each thread
and is allocated by ld.so or libc.a. This is an ABI break from 5.9-stable!
Make libpthread dlopen'able by moving the cancelation wrappers into libc
and doing locking and fork/errno handling via callbacks that libpthread
registers when it first initializes. 'errno' *must* be declared via
<errno.h> now!
Clean up libpthread's symbol exports like libc.
On powerpc, offset the TIB/TCB/TLS data from the register per the ELF spec.
Testing by various, particularly sthen@ and patrick@
ok kettenis@
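Concretely, for the errno requirement above:
    #include <errno.h>       /* correct: errno now expands to a per-thread lvalue */
    /* extern int errno; */  /* no longer acceptable: errno lives in the TIB now */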
|
|
as osendsyslog for a while. The three-argument variant is the only
one that will stay.
input kettenis@; OK deraadt@
|