Also avoid command option injection for ls(1).
OK martijn@
|
|
- use the correct length when checking for "-v lastchance=yes"
- don't try to zero pass if it is NULL
From miod@
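A sketch of the length-check class of bug being fixed; the helper name and
the exact comparison are hypothetical, not the actual login_passwd code:

    #include <string.h>

    /* Match the "lastchance=yes" option using the full length of the
     * literal; a hard-coded length that is too short would also accept
     * strings that merely share a prefix. */
    static int
    is_lastchance_yes(const char *arg)
    {
        return strncmp(arg, "lastchance=yes",
            sizeof("lastchance=yes") - 1) == 0;
    }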
|
|
In 2001 login_passwd was made modular so we could use the same
source for passwd and kerberos auth. Now that we no longer have
kerberos integrated we can simplify login_passwd. OK deraadt@
|
|
Prompted by Qualys leveraging a malloc failure in _dl_split_path() to get
stuff past it.
ok deraadt@ millert@
|
|
just delete them without looking.
ok millert@
|
|
This bug was introduced in the login_passwd rewrite back in 2001.
From Tom Longshine.
|
|
set-user-ID and set-group-ID executables in low memory conditions.
Reported by Qualys
|
|
unmapped and ld.so tries again with a different random address layout.
In this case, use the new libc executable address for msyscall(2),
not one from the first try. Fixes sporadic bogus syscall on i386.
OK deraadt@
|
|
|
|
as that deletes the lazy relocation trampoline which ltrace currently
depends on.
problem reported by tb@
directional feedback kettenis@
ok mpi@
|
|
better that folk doing development in here use their own cp tooling.
|
|
|
|
We don't want to CLEANFILES this one. On occasion this comes in useful.
|
|
enforce a new policy: system calls must be in pre-registered regions.
We have discussed more strict checks than this, but none satisfy the
cost/benefit based upon our understanding of attack methods; anyway,
let's see what the next iteration looks like.
This is intended to harden (translation: attackers must put extra
effort into attacking) against a mixture of W^X failures and JIT bugs
which allow syscall misinterpretation, especially in environments with
polymorphic-instruction/variable-sized instructions. It fits in a bit
with libc/libcrypto/ld.so random relink on boot and no-restart-at-crash
behaviour, particularly for remote problems. Less effective once on-host,
since the libraries can be read.
For static executables the kernel registers the main program's
PIE-mapped exec section as valid, as well as the randomly-placed sigtramp
page. For dynamic executables ELF ld.so's exec segment is also
labelled valid; ld.so then has enough information to register libc's
exec section as valid via a call-once msyscall(2).
For dynamic binaries, we continue to permit the main program exec
segment because "go" (and potentially a few other applications) have
embedded system calls in the main program. Hopefully at least go gets
fixed soon.
We declare the concept of embedded syscalls a bad idea for numerous
reasons, as we notice the ecosystem has many static-syscall-in-base-binary
programs which are dynamically linked against
libraries which in turn use libc, which contains another set of
syscall stubs. We've been concerned about adding even one additional
syscall entry point... but go's approach tends to double the entry-point
attack surface.
This was started at a nano-hackathon in Bob Beck's basement 2 weeks
ago during a long discussion with mortimer trying to hide from the SSL
scream-conversations, and finished in more comfortable circumstances
next to a wood-stove at Elk Lakes cabin with UVM scream-conversations.
ok guenther kettenis mortimer, lots of feedback from others
conversations about go with jsing tb sthen
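A rough sketch of the ld.so side of the registration described above,
assuming the msyscall(void *addr, size_t len) interface; the helper is
illustrative, not the actual ld.so code:

    #include <stddef.h>

    /* msyscall(2): tell the kernel that [addr, addr+len) is a region
     * from which system call instructions are permitted.  It may only
     * be called once. */
    int msyscall(void *addr, size_t len);

    /* After libc.so has been mapped, register its exec segment so the
     * kernel will accept syscalls issued from libc's stubs. */
    static void
    register_libc_text(void *text_start, size_t text_len)
    {
        /* fatal if this fails; ld.so has no fallback */
        if (msyscall(text_start, text_len) == -1)
            __builtin_trap();
    }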
|
|
|
|
something's broken on at least i386.
|
|
|
|
|
|
exactly like the ABS{32,64} relocation there.
noted by and ok kettenis@
|
|
that handle a dozen relocation types for each, just have a nice little switch
for the four specific relocations that actually occur.
Besides being smaller and easier to understand, this fixes the COPY
relocation handling to only do one symbol lookup, instead of looking
up the symbol and then immediately looking it up again (with the
correct flags to find the instance it needs).
ok kettenis@
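The shape of the change, sketched as a self-contained function; the amd64
constant names from <elf.h> are stand-ins, and the exact set of four types
depends on the architecture:

    #include <elf.h>
    #include <string.h>

    /* Apply one RELA entry; only the few types that actually occur are
     * handled.  loff is the object's load offset, sym_addr/sym describe
     * the already-resolved symbol (one lookup, done by the caller). */
    static void
    apply_one_rela(Elf64_Addr loff, const Elf64_Rela *rel,
        Elf64_Addr sym_addr, const Elf64_Sym *sym)
    {
        Elf64_Addr *where = (Elf64_Addr *)(loff + rel->r_offset);

        switch (ELF64_R_TYPE(rel->r_info)) {
        case R_X86_64_NONE:
            break;
        case R_X86_64_GLOB_DAT:     /* GOT slot gets symbol address */
            *where = sym_addr + rel->r_addend;
            break;
        case R_X86_64_RELATIVE:     /* load offset + addend */
            *where = loff + rel->r_addend;
            break;
        case R_X86_64_COPY:         /* copy the library's initialized data */
            memcpy(where, (const void *)sym_addr, sym->st_size);
            break;
        }
    }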
|
|
relocation from _dl_md_reloc() to _dl_md_reloc_all_plt() which has
the minimal code to do it.
Also, avoid division on PLTRELSZ; just use it to offset to the end.
ok kettenis@
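Not the actual _dl_md_reloc_all_plt(), just the loop shape described,
assuming a RELA architecture and illustrative parameter names:

    #include <elf.h>
    #include <stddef.h>

    /* Bind all PLT entries now.  PLTRELSZ is used only to find the end
     * of the table; no division by sizeof(Elf64_Rela) is needed. */
    static void
    reloc_all_plt(Elf64_Addr loff, const Elf64_Rela *jmprel, size_t pltrelsz)
    {
        const Elf64_Rela *rel = jmprel;
        const Elf64_Rela *end =
            (const Elf64_Rela *)((const char *)jmprel + pltrelsz);

        for (; rel < end; rel++) {
            Elf64_Addr *got_slot = (Elf64_Addr *)(loff + rel->r_offset);

            /* resolve ELF64_R_SYM(rel->r_info) and store the symbol's
             * address into *got_slot */
            (void)got_slot;
        }
    }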
|
|
have NONE and REL32_64 relocations w/o symbol.
ok visa@
|
|
in the HAVE_JMPREL case anyway), and reduce #includes to match boot.c
ok visa@
|
|
ok visa@
|
|
ok mpi@ kettenis@
|
|
Strip superfluous parens from return statements while here.
Done programmatically with two perl invocations.
idea ok kettenis@ drahn@
ok visa@
|
|
part of the review. My fail for forgetting to diff my tree against what
was reviewed
problem noted by deraadt@
|
|
'relative' relocation. Take advantage of that to simplify ld.so's self-reloc
code:
* give the exceptional archs (hppa and mips64) copies of the current boot.c
as boot_md.c
* teach the Makefile to use boot_md.c when present
* reduce boot.c down to the minimum necessary to handle just relative reloc
* teach the Makefile to fail if the built ld.so has other types of relocs
ok visa@ kettenis@
|
|
sections; despite being a RELA arch, ld.so was making assumptions about
the initialization of the targeted location.
Add the relative relocation optimization, handling relocations
covered by the DT_RELACOUNT value in a tight loop.
ok mpi@ deraadt@
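A minimal sketch of the tight loop described, assuming a 64-bit RELA layout;
not the actual ld.so code:

    #include <elf.h>
    #include <stddef.h>

    /* DT_RELACOUNT gives the number of R_*_RELATIVE entries sorted to
     * the front of the RELA table, so they can be applied without any
     * symbol lookup: target = load offset + addend. */
    static void
    apply_relative_relas(Elf64_Addr loff, const Elf64_Rela *rela,
        size_t relacount)
    {
        size_t i;

        for (i = 0; i < relacount; i++, rela++) {
            Elf64_Addr *where = (Elf64_Addr *)(loff + rela->r_offset);

            *where = loff + rela->r_addend;
        }
    }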
|
|
ok mlarkin@, mpi@, krw@, deraadt@
|
|
ok mpi@
|
|
The existing code did a full recursive walk for O(horrible). Instead,
keep a single list of nodes plus the index of the first node whose
children haven't been scanned; loop until that index catches up with the
end, appending the unscanned children of the node at the index. This
also makes the grpsym list order match that calculated by FreeBSD and
glibc in dependency trees with inconsistent ordering of dependent libs.
To make this easier and more cache friendly, convert grpsym_list
to a vector: the size is bounded by the number of objects currently
loaded.
Other, related fixes:
* increment the grpsym generation number _after_ pushing the loading
object onto its grpsym list, to avoid double counting it
* increment the grpsym generation number when building the grpsym list
for an already loaded object that's being dlopen()ed, to avoid
incomplete grpsym lists
* use a more accurate test of whether an object already has a grpsym list
Prompted by a diff from Nathanael Rensen (nathanael (at) list.polymorpheus.com)
that pointed to _dl_cache_grpsym_list() as a performance bottleneck.
Much prodding from robert@, sthen@, aja@, jca@
no problem reports after being in snaps
ok mpi@
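A simplified sketch of the worklist scan described above; the node structure
and the visited/generation handling are illustrative, not ld.so's actual
data structures:

    #include <stddef.h>

    struct node {
        struct node **children;     /* NULL-terminated array */
        int           visited;      /* generation marker, simplified */
    };

    /* Breadth-first flattening of a dependency graph into a vector.
     * "scanned" is the index of the first node whose children have not
     * been appended yet; the loop ends when it catches up with the end
     * of the vector.  Size is bounded by the number of loaded objects. */
    static size_t
    build_grpsym_vector(struct node *root, struct node **vec, size_t max)
    {
        size_t used = 0, scanned = 0;

        root->visited = 1;
        vec[used++] = root;
        while (scanned < used) {
            struct node *n = vec[scanned++];
            struct node **cp;

            for (cp = n->children; cp != NULL && *cp != NULL; cp++) {
                if (!(*cp)->visited && used < max) {
                    (*cp)->visited = 1;
                    vec[used++] = *cp;
                }
            }
        }
        return used;
    }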
|
|
_dl_free(), which would trigger a "recursive call" assertion...if we
had ever realloced in ld.so
ok deraadt@
|
|
X11R6).
Suggested by tb@
ok deraadt@ tb@ millert@
|
|
ok visa@ guenther@
|
|
ok guenther@
|
|
|
|
|
|
|
|
be used to effectively remove filesystem access.
That being said, when I pledge(2)d spamd(8), the main priv process got
"stdio inet", which means there's no fs access at all, so calling
chroot(2)/chdir(2) here doesn't get us any additional protection. Just remove
them.
OK deraadt@ and no objections from schwarze@
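For reference, a sketch of the pledge(2) call in question; the wrapper name
is made up:

    #include <err.h>
    #include <unistd.h>

    static void
    drop_to_stdio_inet(void)
    {
        /* With no "rpath"/"wpath"/"cpath" promises, the process cannot
         * open anything on the filesystem afterwards, chroot or not. */
        if (pledge("stdio inet", NULL) == -1)
            err(1, "pledge");
    }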
|
|
hiding the actual grotty bits in inline functions
ok mpi@
|
|
- the symbol it found, returned via the second argument
- the base offset of the object it was found in, via the return value
- optionally: the object it was found in, returned via the last argument
Instead, return a struct with the symbol and object pointers and let the
caller get the base offset from the object's obj_base member. On at least
aarch64, amd64, mips64, powerpc, and sparc64, a two word struct like this
is passed in registers.
ok mpi@, kettenis@
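An illustrative sketch of the interface shape, not the actual resolve.h
declarations:

    #include <elf.h>

    typedef struct elf_object elf_object_t;    /* ld.so's object handle */

    /* Two-pointer result: fits in registers on LP64 ABIs. */
    struct sym_res {
        const Elf64_Sym *sym;    /* NULL when the lookup fails */
        elf_object_t    *obj;    /* object the symbol was found in */
    };

    struct sym_res _dl_find_symbol(const char *name, int flags,
                       const Elf64_Sym *ref_sym, elf_object_t *req_obj);

    /* The caller derives the base offset itself:
     *    struct sym_res sr = _dl_find_symbol(...);
     *    if (sr.sym != NULL)
     *        addr = sr.obj->obj_base + sr.sym->st_value;
     */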
|
|
ok kettenis@
|
|
ok deraadt@, kettenis@
|
|
|
|
|
|
Add an internal version of pcap_open_live that ensures bpf(4) devices
are opened read-only before locking. Neither pflogd(8) nor spamlogd(8)
requires write access to bpf(4). Inspired by a similar solution in
OpenBSD tcpdump(8).
pflogd(8) has been safe since being unveiled last year, but spamlogd(8)
was opening /dev/bpf O_RDWR.
Issue discovered by bluhm@'s unveil(2) accounting commit.
ok deraadt@, mestre@ (thanks for testing spamlogd!)
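A rough sketch of the idea, assuming the bpf(4) BIOCLOCK ioctl; the helper
names are made up and libpcap's internals differ:

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>
    #include <fcntl.h>

    /* Capture-only daemons never need to write to bpf(4), so open the
     * device read-only; O_RDWR would also permit packet injection. */
    static int
    open_bpf_readonly(const char *dev)
    {
        return open(dev, O_RDONLY);
    }

    /* Once the interface and filter are configured, the descriptor is
     * typically locked so its parameters cannot be changed later. */
    static int
    lock_bpf(int fd)
    {
        return ioctl(fd, BIOCLOCK);
    }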
|
|
in default, cannot get anywhere near the filesystem since its only promises are
"stdio inet". Furthermore, in blacklist mode this same codepath is not
chroot'ed but once again it gets the same pledge(2).
Therefore we can remove the BUGS section from spamd(8)'s manpage.
OK millert@ deraadt@
|
|
In 2013, I implemented the single-entry LRU cache that gets the maximal
symbol reuse from combreloc. Since then, the ld.so generic relocation
symcache has been a waste of CPU and memory with 0% hit-rate, so kill it.
ok mpi@
|
|
the change in __getcwd(2)'s return value. Fix it by switching to the
__realpath(2) syscall, eliminating the ld.so copy of realpath().
problem caught by regress and noted by bluhm@
ok deraadt@
|