|
DT_PPC_GOT is not used on powerpc64, delete.
|
|
ok kettenis@
|
|
Mostly ported, code runs far enough to start first symbol string lookup.
build with -gdwarf-4 to remove asm warnings.
Do not bother supporting 32 bit non-pic relocations in shared libraries.
(however leave the code there for now)
|
|
They won't work any more due to pledge restrictions so just print
an error and exit if the spool is world-writable. OK beck@
|
|
Initialize "pass" to the empty string instead of NULL, otherwise
crypt_checkpass() will dereference NULL.
From Yuichiro Naito via yasuoka@. OK deraadt@
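A minimal C sketch of the pattern described above (helper name and flow are
illustrative, not login_passwd's actual code): keep the password pointer aimed
at an empty string so crypt_checkpass(3) never sees NULL.
    #include <pwd.h>
    #include <unistd.h>

    static int
    check_response(const char *username, const char *response)
    {
            struct passwd *pwd = getpwnam(username);
            const char *pass = (response != NULL) ? response : "";
            const char *hash = (pwd != NULL) ? pwd->pw_passwd : NULL;

            /* Returns 0 on a match; a NULL hash always fails safely. */
            return crypt_checkpass(pass, hash) == 0;
    }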
|
|
While here, use consistent casing and don't use .Ev for
set-user-ID/set-group-ID.
from Miod
|
|
from Miod
|
|
ok kettenis@, deraadt@
|
|
problems as 64-bit models. To resolve the syscall speculation, as a first
step "nop; nop" was added after all occurances of the syscall ("swi 0")
instruction. Then the kernel was changed to jump over the 2 extra instructions.
In this final step, those pair of nops are converted into the speculation-blocking
sequence ("dsb nsh; isb").
Don't try to build through these multiple steps, use a snapshot instead.
Packages matching the new ABI will be out in a while...
ok kettenis
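For illustration, a hedged inline-asm sketch of the final instruction ordering
(register setup and the real stub layout are omitted; this only shows the
sequence the commit describes):
    /* Sketch: the two slots after "swi 0" now hold barriers, not nops. */
    static inline void
    syscall_then_barrier(void)
    {
            __asm volatile(
                "swi 0\n\t"      /* enter the kernel */
                "dsb nsh\n\t"    /* data synchronization barrier */
                "isb\n\t"        /* flush speculative instruction fetch */
                ::: "memory");
    }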
|
|
problems as 64-bit models. For the syscall instruction issue, add nop;nop
after swi 0, in preparation for jumping over a speculation barrier here later.
(a lonely swi 0 was hiding in __asm in this file)
|
|
problems as 64-bit models. For the syscall instruction issue, add nop;nop
after swi 0, in preparation for jumping over a speculation barrier here later.
ok kettenis
|
|
Fixes a "vfprintf %s NULL" warning in ftpd.
OK deraadt@ tb@
|
|
a syscall, replace the double nop with a dsb nsh; isb; sequence which
stops the CPU from speculating any further. This fix was suggested
by Anthony Steinhauser.
ok deraadt@
|
|
Unix MTAs use the exit value of the MDA (here mail.local) to determine
whether or not a failure to deliver mail should be considered to be
a temporary or permanent failure. OK semarie@ beck@
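As a hedged sketch of that convention (helper names are hypothetical, not
mail.local's code), an MDA signals retryable trouble with EX_TEMPFAIL from
sysexits(3) and permanent trouble with another nonzero code:
    #include <sysexits.h>
    #include <stdbool.h>

    /* Hypothetical stand-ins for the real locking/delivery steps. */
    static bool lock_spool(void)    { return true; }
    static bool write_message(void) { return true; }

    int
    main(void)
    {
            if (!lock_spool())
                    return EX_TEMPFAIL;   /* transient: the MTA retries later */
            if (!write_message())
                    return EX_CANTCREAT;  /* permanent: the MTA bounces the mail */
            return EX_OK;
    }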
|
|
Starting from the "Combined Table of Contents" in Doug McIlroy's
"A Research UNIX Reader", a table of which edition manuals appeared in.
Checked against manuals from bitsavers/TUHS and source from TUHS where
available.
Ingo points out there are cases where something is included but not
documented until a later release.
bcd(6) v6 v7
printf(3) v2 v4
abort(3) v5 v6
system(3) v6 v7
fmod(3) v5 v6
ok schwarze@
|
|
The -H flag was deprecated in 1998. OK jung@
|
|
If mail.local is invoked by a non-root user, open a pipe to
lockspool(1) for file locking. It is only possible to deliver to
a pre-existing mail spool when running mail.local as non-root.
OK gilles@ deraadt@
|
|
They will be replaced by a speculation barrier as soon as we teach the
kernel to skip over these two instructions when returning from a
system call.
ok patrick@, deraadt@
|
|
Also avoid command option injection for ls(1).
OK martijn@
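A minimal sketch of the usual guard (flag choice and helper are assumptions,
not ftpd's exact invocation): end option parsing with "--" before any
user-supplied name so a name such as "-R" cannot become an option.
    #include <unistd.h>

    static void
    exec_ls(const char *user_supplied_path)
    {
            /* "--" stops option parsing; the path can no longer inject flags. */
            execlp("ls", "ls", "-l", "--", user_supplied_path, (char *)NULL);
    }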
|
|
- use the correct length when checking for "-v lastchance=yes"
- don't try to zero pass if it is NULL
From miod@
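A hedged sketch of both fixes (names are illustrative, not login_passwd's
actual identifiers): measure the prefix with sizeof so the comparison length
is right, and only wipe a buffer that exists.
    #include <string.h>

    #define LASTCHANCE "lastchance="

    static int
    is_lastchance_yes(const char *arg)
    {
            /* sizeof() - 1 gives the exact prefix length, '=' included. */
            return strncmp(arg, LASTCHANCE, sizeof(LASTCHANCE) - 1) == 0 &&
                strcmp(arg + sizeof(LASTCHANCE) - 1, "yes") == 0;
    }

    static void
    scrub_password(char *pass)
    {
            if (pass != NULL)                       /* never zero NULL */
                    explicit_bzero(pass, strlen(pass));
    }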
|
|
In 2001 login_passwd was made modular so we could use the same
source for passwd and kerberos auth. Now that we no longer have
kerberos integrated we can simplify login_passwd. OK deraadt@
|
|
Prompted by Qualys's leveraging malloc failure in _dl_split_path() to get
stuff past.
ok deraadt@ millert@
|
|
just delete them without looking.
ok millert@
|
|
This bug was introduced in the login_passwd rewrite back in 2001.
From Tom Longshine.
|
|
set-user-ID and set-group-ID executables in low memory conditions.
Reported by Qualys
|
|
unmapped and ld.so tries again with different random address layout.
In this case, use the new libc executable address for msyscall(2),
not one from the first try. Fixes sporadic bogus syscall on i386.
OK deraadt@
|
|
|
|
that deletes the lazy relocation trampoline which ltrace currently
depends on.
problem reported by tb@
directional feedback kettenis@
ok mpi@
|
|
better that folk doing development in here use their own cp tooling.
|
|
|
|
We don't want to CLEANFILES this one. On occasion this comes in useful.
|
|
enforce a new policy: system calls must be in pre-registered regions.
We have discussed more strict checks than this, but none satisfy the
cost/benefit based upon our understanding of attack methods, anyways
let's see what the next iteration looks like.
This is intended to harden (translation: attackers must put extra
effort into attacking) against a mixture of W^X failures and JIT bugs
which allow syscall misinterpretation, especially in environments with
polymorphic-instruction/variable-sized instructions. It fits in a bit
with libc/libcrypto/ld.so random relink on boot and no-restart-at-crash
behaviour, particularly for remote problems. Less effective once on-host,
since the libraries can be read.
For static-executables the kernel registers the main program's
PIE-mapped exec section as valid, as well as the randomly-placed sigtramp
page. For dynamic executables, ELF ld.so's exec segment is also
labelled valid; ld.so then has enough information to register libc's
exec section as valid via call-once msyscall(2).
For dynamic binaries, we continue to permit the main program exec
segment because "go" (and potentially a few other applications) have
embedded system calls in the main program. Hopefully at least go gets
fixed soon.
We declare the concept of embedded syscalls a bad idea for numerous
reasons, as we notice the ecosystem has many
static-syscall-in-base-binary programs which are dynamically linked against
libraries which in turn use libc, which contains another set of
syscall stubs. We've been concerned about adding even one additional
syscall entry point... but go's approach tends to double the entry-point
attack surface.
This was started at a nano-hackathon in Bob Beck's basement 2 weeks
ago during a long discussion with mortimer trying to hide from the SSL
scream-conversations, and finished in more comfortable circumstances
next to a wood-stove at Elk Lakes cabin with UVM scream-conversations.
ok guenther kettenis mortimer, lots of feedback from others
conversations about go with jsing tb sthen
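A conceptual C sketch of the registration step (the prototype matches
msyscall(2), but the helper and segment bounds are assumptions; in practice
only ld.so issues this call, exactly once, for libc's executable segment):
    #include <stddef.h>

    int msyscall(void *addr, size_t len);   /* no libc stub; ld.so-only */

    static int
    register_libc_text(void *text_start, size_t text_len)
    {
            /* The kernel accepts a single registration; later calls fail. */
            return msyscall(text_start, text_len);
    }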
|
|
|
|
something's broken on at least i386.
|
|
|
|
|
|
exactly like the ABS{32,64} relocation there.
noted by and ok kettenis@
|
|
that handle a dozen relocation types for each, just have a nice little switch
for the four specific relocations that actually occur.
Besides being smaller and easier to understand, this fixes the COPY
relocation handling to only do one symbol lookup, instead of looking
up the symbol and then immediately looking it up again (with the
correct flags to find the instance it needs).
ok kettenis@
|
|
relocation from _dl_md_reloc() to _dl_md_reloc_all_plt() which has
the minimal code to do it.
Also, avoid division on PLTRELSZ; just use it to offset to the end.
ok kettenis@
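A hedged sketch of the loop shape mentioned above (generic ELF types and a
hypothetical worker, not ld.so's actual internals): DT_PLTRELSZ is used to
form an end pointer instead of being divided into an entry count.
    #include <elf.h>
    #include <stddef.h>

    /* Hypothetical per-entry worker standing in for the JMP_SLOT fixup. */
    static void fixup_jmpslot(const Elf64_Rela *rela) { (void)rela; }

    static void
    reloc_all_plt(Elf64_Rela *jmprel, size_t pltrelsz)
    {
            /* End pointer straight from PLTRELSZ bytes; no division needed. */
            const Elf64_Rela *end =
                (const Elf64_Rela *)((const char *)jmprel + pltrelsz);
            const Elf64_Rela *rela;

            for (rela = jmprel; rela < end; rela++)
                    fixup_jmpslot(rela);
    }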
|
|
have NONE and REL32_64 relocations w/o symbol.
ok visa@
|
|
in the HAVE_JMPREL case anyway), and reduce #includes to match boot.c
ok visa@
|
|
ok visa@
|
|
ok mpi@ kettenis@
|
|
Strip superfluous parens from return statements while here.
Done programmatically with two perl invocations
idea ok kettenis@ drahn@
ok visa@
|
|
part of the review. My fail for forgetting to diff my tree against what
was reviewed
problem noted by deraadt@
|
|
'relative' relocation. Take advantage of that to simplify ld.so's self-reloc
code:
* give the exceptional archs (hppa and mips64) copies of the current boot.c
as boot_md.c
* teach the Makefile to use boot_md.c when present
* reduce boot.c down to the minimum necessary to handle just relative reloc
* teach the Makefile to fail if the built ld.so has other types of relocs
ok visa@ kettenis@
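For reference, a minimal sketch (generic ELF types, not the exact boot.c code)
of what a relative-reloc-only self-relocation loop boils down to on a RELA
arch: add the load offset to each addend and store it at r_offset.
    #include <elf.h>
    #include <stddef.h>

    static void
    self_reloc_relative(Elf64_Addr loff, const Elf64_Rela *rela, size_t count)
    {
            size_t i;

            for (i = 0; i < count; i++) {
                    Elf64_Addr *where = (Elf64_Addr *)(loff + rela[i].r_offset);
                    *where = loff + rela[i].r_addend;
            }
    }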
|
|
sections; despite being a RELA arch, ld.so was making assumptions about
the initialization of the targeted location.
Add the relative relocation optimization, handling relocations
covered by the DT_RELACOUNT value in a tight loop.
ok mpi@ deraadt@
|
|
ok mlarkin@, mpi@, krw@, deraadt@
|
|
ok mpi@
|
|
The existing code did a full recursive walk for O(horrible). Instead,
keep a single list of nodes plus the index of the first node whose
children haven't been scanned; lookup until that index catches the
end, appending the unscanned children of the node at the index. This
also makes the grpsym list order match that calculated by FreeBSD and
glibc in dependency trees with inconsistent ordering of dependent libs.
To make this easier and more cache friendly, convert grpsym_list
to a vector: the size is bounded by the number of objects currently
loaded.
Other, related fixes:
* increment the grpsym generation number _after_ pushing the loading
object onto its grpsym list, to avoid double counting it
* increment the grpsym generation number when building the grpsym list
for an already loaded object that's being dlopen()ed, to avoid
incomplete grpsym lists
* use a more accurate test of whether an object already has a grpsym list
Prompted by a diff from Nathanael Rensen (nathanael (at) list.polymorpheus.com)
that pointed to _dl_cache_grpsym_list() as a performance bottleneck.
Much prodding from robert@, sthen@, aja@, jca@
no problem reports after being in snaps
ok mpi@
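A hedged sketch of the traversal (hypothetical object type and fixed capacity,
not ld.so's real structures): one vector plus an index of the first unscanned
entry, so the walk is iterative and each object is appended once.
    #include <stddef.h>
    #include <stdbool.h>

    struct obj {
            struct obj **children;          /* NULL-terminated dependencies */
    };

    static bool
    already_listed(struct obj **vec, size_t n, const struct obj *o)
    {
            size_t i;

            for (i = 0; i < n; i++)
                    if (vec[i] == o)
                            return true;
            return false;
    }

    /* Fill grpsym[] (capacity cap) starting from root; return the count. */
    static size_t
    build_grpsym(struct obj *root, struct obj **grpsym, size_t cap)
    {
            size_t n = 0, scan = 0;

            if (cap == 0)
                    return 0;
            grpsym[n++] = root;
            while (scan < n) {                      /* scan = first unscanned */
                    struct obj *o = grpsym[scan++];
                    struct obj **c;

                    for (c = o->children; c != NULL && *c != NULL; c++)
                            if (n < cap && !already_listed(grpsym, n, *c))
                                    grpsym[n++] = *c;
            }
            return n;
    }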
|