path: root/libexec/ld.so
2020-07-18  Use the same names as the 64-bit PowerPC ELF ABI for the relocations.  [Mark Kettenis]
2020-07-16  Rewrite loop to match what is written down in the ABI document.  [Mark Kettenis]
ok drahn@
2020-07-16  Make lazy binding work.  [Mark Kettenis]
Committing on behalf of drahn@ who is a bit busy.
2020-06-28  Disable powerpc64 lazy binding; the code was not for the 64-bit ABI.  [Dale Rahn]
DT_PPC_GOT is not used on powerpc64, delete.
2020-06-28  Powerpc64 ld.so asm code needs to conform to the powerpc64 ABI, not the 32-bit one.  [Dale Rahn]
ok kettenis@
2020-06-25  PowerPC64 ld.so code.  [Dale Rahn]
Mostly ported; the code runs far enough to start the first symbol string lookup. Build with -gdwarf-4 to remove asm warnings. Do not bother supporting 32-bit non-PIC relocations in shared libraries (however, leave the code there for now).
2020-05-08  ld.so(1) also ignores LD_LIBRARY_PATH and friends for set-group-ID executables.  [Jeremie Courreges-Anglas]
While here, use consistent casing and don't use .Ev for set-user-ID/set-group-ID. from Miod
2020-05-08  LD_DEBUG is ignored for set-user-ID and set-group-ID executables.  [Jeremie Courreges-Anglas]
from Miod
2020-03-27  Add missing space in stack smash handler error message.  [Matthieu Herrb]
ok kettenis@, deraadt@
2020-03-13  Anthony Steinhauser reports that 32-bit arm cpus have the same speculation problems as 64-bit models.  [Theo de Raadt]
To resolve the syscall speculation, as a first step "nop; nop" was added after all occurrences of the syscall ("swi 0") instruction. Then the kernel was changed to jump over the 2 extra instructions. In this final step, those pairs of nops are converted into the speculation-blocking sequence ("dsb nsh; isb"). Don't try to build through these multiple steps; use a snapshot instead. Packages matching the new ABI will be out in a while... ok kettenis
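For reference, a rough sketch of the instruction sequence this series converges on, written as C inline asm purely for illustration; the real stubs come from libc's asm macros, and register/calling conventions are omitted here:

    /* sketch only, not the tree's actual syscall stub */
    void
    syscall_shape_sketch(void)
    {
        __asm volatile(
            "swi 0\n\t"    /* the system call itself */
            "dsb nsh\n\t"  /* speculation-blocking pair that replaced */
            "isb"          /* the two placeholder nops */
            ::: "memory");
    }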
2020-03-13  Anthony Steinhauser reports that 32-bit arm cpus have the same speculation problems as 64-bit models.  [Theo de Raadt]
For the syscall instruction issue, add nop;nop after swi 0, in preparation for jumping over a speculation barrier here later. (A lonely swi 0 was hiding in __asm in this file.)
2020-03-11  Anthony Steinhauser reports that 32-bit arm cpus have the same speculation problems as 64-bit models.  [Theo de Raadt]
For the syscall instruction issue, add nop;nop after swi 0, in preparation for jumping over a speculation barrier here later. ok kettenis
2020-02-18  Now that the kernel skips the two instructions immediately following a syscall, replace the double nop with a "dsb nsh; isb" sequence which stops the CPU from speculating any further.  [Mark Kettenis]
This fix was suggested by Anthony Steinhauser. ok deraadt@
2020-01-26  Insert two nop instructions after each svc #0 instruction in userland.  [Mark Kettenis]
They will be replaced by a speculation barrier as soon as we teach the kernel to skip over these two instructions when returning from a system call. ok patrick@, deraadt@
2019-12-17  Eliminate failure returns from _dl_split_path(): if malloc fails just _dl_oom().  [Philip Guenther]
Prompted by Qualys's leveraging malloc failure in _dl_split_path() to get stuff past. ok deraadt@ millert@
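A simplified sketch of the pattern, assuming only that _dl_malloc() and _dl_oom() behave as described here (allocate, and bail out on failure); it is not the committed _dl_split_path():

    /* sketch: allocation failure is fatal, so callers never see NULL */
    static void *
    xmalloc_sketch(size_t len)
    {
        void *p = _dl_malloc(len);

        if (p == NULL)
            _dl_oom();    /* reports the failure and exits; never returns */
        return p;
    }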
2019-12-17  Don't look up env variables until we know we'll trust them; otherwise, just delete them without looking.  [Philip Guenther]
ok millert@
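A rough sketch of that ordering; _dl_getenv()/_dl_unsetenv() are ld.so's environment helpers, while the trusted flag, the variable list, and the libpath_sketch placeholder are illustrative only:

    /* sketch: for untrusted (set-id) processes, scrub the variables
     * without ever reading their values */
    static const char *libpath_sketch;

    static void
    handle_env_sketch(char **envp, int trusted)
    {
        if (!trusted) {
            _dl_unsetenv("LD_LIBRARY_PATH", envp);
            _dl_unsetenv("LD_PRELOAD", envp);
            _dl_unsetenv("LD_DEBUG", envp);
            return;
        }
        /* only a trusted process gets its variables looked up */
        libpath_sketch = _dl_getenv("LD_LIBRARY_PATH", envp);
    }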
2019-12-11  ld.so may fail to remove the LD_LIBRARY_PATH environment variable for set-user-ID and set-group-ID executables in low memory conditions.  [Todd C. Miller]
Reported by Qualys.
2019-12-09  When loading a library, mmap(2) may fail.  [Alexander Bluhm]
Then everything gets unmapped and ld.so tries again with a different random address layout. In this case, use the new libc executable address for msyscall(2), not the one from the first try. Fixes sporadic bogus syscall on i386. OK deraadt@
2019-12-09  Print addresses upon msyscall failure, for now.  [Theo de Raadt]
2019-12-07  Disable ltrace for objects linked with -znow, as at least on amd64, linking that way deletes the lazy relocation trampoline which ltrace currently depends on.  [Philip Guenther]
Problem reported by tb@; directional feedback kettenis@; ok mpi@
2019-12-02  It is not always clear what ld.so was backed up to ld.so.backup, and better that folk doing development in here use their own cp tooling.  [Theo de Raadt]
2019-11-30  Sigh, fix i386 msyscall() case to permission the correct address range.  [Theo de Raadt]
2019-11-29  As additional paranoia, make a copy of system ld.so into obj/ld.so.backup.  [Theo de Raadt]
We don't want to CLEANFILES this one. On occasion this comes in useful.
2019-11-29  Repurpose the "syscalls must be on a writeable page" mechanism to enforce a new policy: system calls must be in pre-registered regions.  [Theo de Raadt]
We have discussed more strict checks than this, but none satisfy the cost/benefit based upon our understanding of attack methods; anyway, let's see what the next iteration looks like.

This is intended to harden (translation: attackers must put extra effort into attacking) against a mixture of W^X failures and JIT bugs which allow syscall misinterpretation, especially in environments with polymorphic-instruction/variable-sized instructions. It fits in a bit with libc/libcrypto/ld.so random relink on boot and no-restart-at-crash behaviour, particularly for remote problems. Less effective once on-host, since the libraries can be read.

For static executables the kernel registers the main program's PIE-mapped exec section as valid, as well as the randomly-placed sigtramp page. For dynamic executables ELF ld.so's exec segment is also labelled valid; ld.so then has enough information to register libc's exec section as valid via call-once msyscall(2).

For dynamic binaries, we continue to permit the main program exec segment because "go" (and potentially a few other applications) have embedded system calls in the main program. Hopefully at least go gets fixed soon. We declare the concept of embedded syscalls a bad idea for numerous reasons, as we notice the ecosystem has many static-syscall-in-base binaries which are dynamically linked against libraries which in turn use libc, which contains another set of syscall stubs. We've been concerned about adding even one additional syscall entry point... but go's approach tends to double the entry-point attack surface.

This was started at a nano-hackathon in Bob Beck's basement 2 weeks ago during a long discussion with mortimer trying to hide from the SSL scream-conversations, and finished in more comfortable circumstances next to a wood-stove at Elk Lakes cabin with UVM scream-conversations.

ok guenther kettenis mortimer; lots of feedback from others; conversations about go with jsing tb sthen
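A rough sketch of the call-once registration this gives ld.so (msyscall(2) takes a start address and a length); how libc's text range is found and the _dl_die() error path are simplified placeholders here:

    /* sketch: after libc is mapped, register its exec segment as the
     * pre-approved region for system call instructions */
    int msyscall(void *addr, size_t len);    /* the new OpenBSD system call */

    static void
    register_syscall_region_sketch(void *libc_text, size_t libc_textsz)
    {
        if (msyscall(libc_text, libc_textsz) == -1) {
            /* ld.so reports the failing range and gives up */
            _dl_die("msyscall failed");
        }
    }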
2019-11-28  Unrevert: this change was unrelated.  [Philip Guenther]
2019-11-28  Revert yesterday's _dl_md_reloc() and _dl_md_reloc_got() changes: something's broken on at least i386.  [Philip Guenther]
2019-11-27  Delete now-obsolete comments.  [Philip Guenther]
2019-11-27  unifdef: hppa does HAVE_JMPREL and does not have DT_PROCNUM.  [Philip Guenther]
2019-11-27  armv7 and aarch64 specify GLOB_DAT as having an addend, so treat it exactly like the ABS{32,64} relocation there.  [Philip Guenther]
noted by and ok kettenis@
2019-11-26  Clean up _dl_md_reloc(): instead of having tables and piles of conditionals that handle a dozen relocation types for each, just have a nice little switch for the four specific relocations that actually occur.  [Philip Guenther]
Besides being smaller and easier to understand, this fixes the COPY relocation handling to only do one symbol lookup, instead of looking up the symbol and then immediately looking it up again (with the correct flags to find the instance it needs). ok kettenis@
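A sketch of the resulting shape, using aarch64 relocation names as an example; parameter names and the _dl_bcopy() usage are illustrative, not the committed _dl_md_reloc():

    /* sketch: one small switch instead of tables of relocation types */
    static void
    reloc_one_sketch(const Elf_RelA *rela, Elf_Addr loff,
        Elf_Addr sym_addr, const Elf_Sym *sym)
    {
        Elf_Addr *where = (Elf_Addr *)(rela->r_offset + loff);

        switch (ELF_R_TYPE(rela->r_info)) {
        case R_AARCH64_ABS64:
        case R_AARCH64_GLOB_DAT:    /* has an addend; treated like ABS64 */
            *where = sym_addr + rela->r_addend;
            break;
        case R_AARCH64_RELATIVE:    /* no symbol, just add the load offset */
            *where = loff + rela->r_addend;
            break;
        case R_AARCH64_COPY:
            /* one lookup with the right flags, then copy the data */
            _dl_bcopy((const void *)sym_addr, where, sym->st_size);
            break;
        }
    }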
2019-11-26  Make aarch64, amd64, arm, and i386 more like sparc64: move non-lazy relocation from _dl_md_reloc() to _dl_md_reloc_all_plt() which has the minimal code to do it.  [Philip Guenther]
Also, avoid division on PLTRELSZ; just use it to offset to the end. ok kettenis@
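A sketch of both points (a minimal non-lazy loop, and PLTRELSZ used as an end offset rather than divided by the entry size); the field names and the resolve_plt_symbol_sketch() helper are placeholders:

    /* sketch: walk the JMPREL table without dividing by the entry size */
    static void
    reloc_all_plt_sketch(elf_object_t *object)
    {
        const Elf_RelA *rela = (const Elf_RelA *)object->dyn.jmprel;
        const Elf_RelA *end = (const Elf_RelA *)
            ((const char *)rela + object->dyn.pltrelsz);

        for (; rela < end; rela++) {
            Elf_Addr *where =
                (Elf_Addr *)(rela->r_offset + object->obj_base);

            /* non-lazy: resolve now and patch the GOT slot;
             * JUMP_SLOT has no addend (see the 2019-08-03 commit) */
            *where = resolve_plt_symbol_sketch(object, rela);
        }
    }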
2019-11-10  Simplify the handling of the explicit relocations, based on ld.so only having NONE and REL32_64 relocations w/o symbol.  [Philip Guenther]
ok visa@
2019-11-10  unifdef HAVE_JMPREL, delete dt_pltrelsz handling (which was only used in the HAVE_JMPREL case anyway), and reduce #includes to match boot.c.  [Philip Guenther]
ok visa@
2019-11-10  Recommit CHECK_LDSO bits for mips64, verified on both loongson and octeon.  [Philip Guenther]
ok visa@
2019-10-24  Delete unused support for relocations that don't require alignment.  [Philip Guenther]
ok mpi@ kettenis@
2019-10-23  Prefer the size-independent ELF identifiers over the size-specific ones.  [Philip Guenther]
Strip superfluous parens from return statements while here. Done programmatically with two perl invocations. Idea ok kettenis@ drahn@; ok visa@
2019-10-21  Whoops: backout mips64+hppa CHECK_LDSO bits: they weren't done and weren't part of the review.  [Philip Guenther]
My fail for forgetting to diff my tree against what was reviewed. Problem noted by deraadt@
2019-10-20  For more archs, ld.so itself only needs/uses the arch's "just add load offset" 'relative' relocation.  [Philip Guenther]
Take advantage of that to simplify ld.so's self-reloc code:
 * give the exceptional archs (hppa and mips64) copies of the current boot.c as boot_md.c
 * teach the Makefile to use boot_md.c when present
 * reduce boot.c down to the minimum necessary to handle just relative relocs
 * teach the Makefile to fail if the built ld.so has other types of relocs
ok visa@ kettenis@
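Roughly what the reduced boot.c loop amounts to on such an arch; R_RELATIVE_SKETCH stands in for the per-arch relative-relocation constant, and locating the RELA table from _DYNAMIC is omitted:

    /* sketch: self-relocation when only 'relative' relocs can appear */
    static void
    self_reloc_sketch(Elf_Addr loff, const Elf_RelA *rela, size_t relasz)
    {
        const Elf_RelA *end =
            (const Elf_RelA *)((const char *)rela + relasz);

        for (; rela < end; rela++) {
            Elf_Addr *where = (Elf_Addr *)(rela->r_offset + loff);

            if (ELF_R_TYPE(rela->r_info) != R_RELATIVE_SKETCH)
                continue;    /* the Makefile check ensures none exist */
            *where = rela->r_addend + loff;
        }
    }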
2019-10-05  Tighten handling of pure relative DIR32 relocations and those referencing sections; despite being a RELA arch, ld.so was making assumptions about the initialization of the targeted location.  [Philip Guenther]
Add the relative relocation optimization, handling relocations covered by the DT_RELACOUNT value in a tight loop. ok mpi@ deraadt@
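A sketch of that tight loop: the linker sorts the relative relocations first and DT_RELACOUNT says how many there are, so no per-entry type or symbol inspection is needed (names here are placeholders):

    /* sketch: apply the first DT_RELACOUNT entries without any checks */
    static const Elf_RelA *
    reloc_relative_sketch(const Elf_RelA *rela, Elf_Addr loff, size_t relacount)
    {
        size_t i;

        for (i = 0; i < relacount; i++, rela++)
            *(Elf_Addr *)(rela->r_offset + loff) =
                loff + rela->r_addend;
        return rela;    /* the rest go through the normal switch */
    }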
2019-10-05  Delete some obsolete debugging #ifdef blocks.  [Philip Guenther]
ok mlarkin@, mpi@, krw@, deraadt@
2019-10-04  Convert the child_list member from a linked list to a vector.  [Philip Guenther]
ok mpi@
2019-10-03  Use a better algorithm for calculating the grpsym library order.  [Philip Guenther]
The existing code did a full recursive walk for O(horrible). Instead, keep a single list of nodes plus the index of the first node whose children haven't been scanned; loop until that index catches the end, appending the unscanned children of the node at the index. This also makes the grpsym list order match that calculated by FreeBSD and glibc in dependency trees with inconsistent ordering of dependent libs. To make this easier and more cache friendly, convert grpsym_list to a vector: the size is bounded by the number of objects currently loaded.

Other, related fixes:
 * increment the grpsym generation number _after_ pushing the loading object onto its grpsym list, to avoid double counting it
 * increment the grpsym generation number when building the grpsym list for an already loaded object that's being dlopen()ed, to avoid incomplete grpsym lists
 * use a more accurate test of whether an object already has a grpsym list

Prompted by a diff from Nathanael Rensen (nathanael (at) list.polymorpheus.com) that pointed to _dl_cache_grpsym_list() as a performance bottleneck. Much prodding from robert@, sthen@, aja@, jca@. No problem reports after being in snaps. ok mpi@
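A sketch of the scan-from-an-index idea; the vector type, the objvec_push()/objvec_contains() helpers, and the children array are placeholders rather than ld.so's actual data structures:

    /* sketch: breadth-first build of the grpsym vector */
    struct objvec { elf_object_t **vec; size_t len; };

    static void
    build_grpsym_sketch(struct objvec *grpsym, elf_object_t *root)
    {
        size_t scanned = 0;    /* first node whose children are unscanned */
        size_t i;

        objvec_push(grpsym, root);
        while (scanned < grpsym->len) {
            elf_object_t *node = grpsym->vec[scanned++];

            for (i = 0; node->children[i] != NULL; i++)
                if (!objvec_contains(grpsym, node->children[i]))
                    objvec_push(grpsym, node->children[i]);
        }
    }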
2019-09-30  Oops: the call to ofree() in orealloc() was misconverted into a call to _dl_free(), which would trigger a "recursive call" assertion... if we had ever realloced in ld.so.  [Philip Guenther]
ok deraadt@
2019-09-02  Remove some duplicate symbol definitions.  [mortimer]
ok visa@ guenther@
2019-08-31  Delete the last argument to fit recent _dl_find_symbol change.  [Kenji Aoyama]
ok guenther@
2019-08-06  Factor out TEXTREL mprotecting from the per-arch files into _dl_rtld(), hiding the actual grotty bits in inline functions.  [Philip Guenther]
ok mpi@
2019-08-04  Simplify _dl_find_symbol(). Currently, it returns three values:  [Philip Guenther]
- the symbol it found, returned via the second argument
- the base offset of the object it was found in, via the return value
- optionally: the object it was found in, returned via the last argument
Instead, return a struct with the symbol and object pointers and let the caller get the base offset from the object's obj_base member. On at least aarch64, amd64, mips64, powerpc, and sparc64, a two-word struct like this is passed in registers. ok mpi@, kettenis@
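In outline, the new interface looks something like the sketch below; the sym/obj member names follow the commit text, while the argument list and everything else is illustrative:

    /* sketch of the struct return described above */
    struct sym_res {
        const Elf_Sym       *sym;    /* symbol found, or NULL */
        const elf_object_t  *obj;    /* object it was found in */
    };

    /* caller side (illustrative argument list): */
    struct sym_res res = _dl_find_symbol(name, flags, ref_sym, req_obj);
    if (res.sym != NULL)
        addr = res.obj->obj_base + res.sym->st_value;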
2019-08-03  The ABI says JUMP_SLOT relocations don't have an addend, so don't add it in.  [Philip Guenther]
ok kettenis@
2019-08-03  Suppress DWARF2 warnings on clang archs by building with -gdwarf-4.  [Philip Guenther]
ok deraadt@, kettenis@
2019-07-21  In 2004, we upgraded to binutils 2.14, which did -zcombreloc by default.  [Philip Guenther]
In 2013, I implemented the single-entry LRU cache that gets the maximal symbol reuse from combreloc. Since then, the ld.so generic relocation symcache has been a waste of CPU and memory with 0% hit-rate, so kill it. ok mpi@
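A sketch of the single-entry cache that remains; because -zcombreloc groups relocations against the same symbol together, remembering just the previous lookup captures nearly all the reuse (helper names are placeholders):

    /* sketch: reuse the previous lookup across consecutive relocations */
    static void
    reloc_loop_sketch(elf_object_t *object, const Elf_RelA *rela,
        const Elf_RelA *end)
    {
        unsigned long prev_sym = (unsigned long)-1;
        struct sym_res prev_res = { NULL, NULL };

        for (; rela < end; rela++) {
            unsigned long symidx = ELF_R_SYM(rela->r_info);

            if (symidx != prev_sym) {    /* single-entry LRU */
                prev_res = find_symbol_sketch(object, symidx);
                prev_sym = symidx;
            }
            apply_reloc_sketch(object, rela, &prev_res);
        }
    }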