Age | Commit message | Author |
|
HTTP proxy (i.e. for fetching resources over https). This is required by
some proxy servers.
From KUWAZAWA Takuya, ok tb@
|
|
A recent update to filezilla showed a server that would refuse to let us
download the distfile without us sending this header. Browsers, curl and
wget do so, so it should be safe for us to follow suit.
ok deraadt florian phessler sthen
|
|
since fetch.c revision 1.211, ftp removes trailing whitespace early so
there's no need to re-do that when parsing a header.
while here, remove an unused variable too.
ok tb, millert
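The early trimming mentioned above can be sketched as a small helper run once on each line right after it is read. This is a hypothetical illustration, not ftp's actual code:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Trim trailing whitespace in place, once, right after reading a
 * line, so later header parsing does not need to repeat it. */
static void
rtrim(char *s)
{
	size_t len = strlen(s);

	while (len > 0 && isspace((unsigned char)s[len - 1]))
		s[--len] = '\0';
}
```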
|
|
Was overlooked in r1.209.
diff from 'a dog' (OpenBSD [at] anthropomorphic [dot] dog)
ok tb, sthen
|
|
ok miod@ millert@
|
|
amendments to his diff are noted on tech
|
|
untrusted input.
OK tb@ kn@ millert@
|
|
HTTP standard allows for spaces in too many places
OK millert@ tb@
|
|
For hosts with multiple IP addrs this makes it possible to fall
over from an unresponsive IP to another. This also replaces the
other connect(2) + connect_wait() calls with timed_connect() so the
-w option now works for more than just http. OK sthen@ deraadt@
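A timed connect of this shape is usually built from a non-blocking connect(2) plus poll(2). The sketch below is an illustration of the technique only; the real helper in ftp has its own signature, and on timeout the caller can simply try the host's next address:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical timed_connect()-style helper: start a non-blocking
 * connect(2), poll(2) for writability up to "timeout" ms, then
 * check SO_ERROR for the real connect result. */
static int
timed_connect(int s, const struct sockaddr *sa, socklen_t len, int timeout)
{
	struct pollfd pfd;
	int error;
	socklen_t errlen = sizeof(error);

	if (fcntl(s, F_SETFL, O_NONBLOCK) == -1)
		return -1;
	if (connect(s, sa, len) == 0)
		return 0;
	if (errno != EINPROGRESS)
		return -1;
	pfd.fd = s;
	pfd.events = POLLOUT;
	if (poll(&pfd, 1, timeout) <= 0)
		return -1;	/* timeout or poll error */
	if (getsockopt(s, SOL_SOCKET, SO_ERROR, &error, &errlen) == -1)
		return -1;
	if (error != 0) {
		errno = error;
		return -1;
	}
	return 0;
}
```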
|
|
RFC9112 allows any amount of space/tabs between the ':' and the value.
Until now this code required exactly one space which works most of the
time but is not RFC compliant.
OK djm@
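The RFC-compliant parse described above boils down to skipping any run of SP/HTAB after the colon instead of exactly one space. A minimal sketch with a hypothetical helper name:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper: return a pointer to a header's value,
 * skipping the field name, the ':' and any amount of SP/HTAB,
 * as RFC 9112 permits.  Returns NULL if no ':' is found. */
static const char *
header_value(const char *line)
{
	const char *p = strchr(line, ':');

	if (p == NULL)
		return NULL;
	p++;
	while (*p == ' ' || *p == '\t')
		p++;
	return p;
}
```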
|
|
I overlooked the autoinstall case where "Requesting ..." is used,
but those messages that got fixed where omitted in ftp's SMALL version.
Noticed the hard way by anton
|
|
Encoding URL paths changes the requested URL and therefore may yield
different responses (as opposed to an unencoded URL), solely depending on how
the server implements de/encoding.
Always print the encoded URL which actually gets requested in output like
"Requesting ..." and errors like "Error retrieving ....: 404 Not Found"
and don't use the original URL provided on the command line.
This matches exactly what is seen on the wire, e.g. with tshark(1) and
helps debugging URL de/encoding related (server) issues.
Feedback OK sthen
|
|
RFC 1738 Uniform Resource Locators (URL) lists tilde as unsafe character.
RFC 2396 Uniform Resource Identifiers (URI): Generic Syntax updates it to:
The tilde "~" character was added to those in the "unreserved" set,
since it is extensively used on the Internet in spite of the
difficulty to transcribe it with some keyboards.
In theory, this shouldn't make a difference, but some servers do not decode
"%7e" and thus erroneously serve a 404.
RFC 2396 2.4.2. When to Escape and Unescape says:
In some cases, data that could be represented by an unreserved
character may appear escaped; for example, some of the unreserved
"mark" characters are automatically escaped by some systems. If the
given URI scheme defines a canonicalization algorithm, then
unreserved characters may be unescaped according to that algorithm.
For example, "%7e" is sometimes used instead of "~" in an http URL
path, but the two are equivalent for an http URL.
Update ftp(1) to RFC 2396 by no longer treating "~" as unsafe character.
This is effectively a one-character diff; update comments accordingly as
well as the order of characters to ease code-to-standard comparison.
This matches curl(1) and wget(1) behaviour wrt. encoding of "~".
OK sthen
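The effect of the one-character change above is that "~" lands in the set of characters left untouched by the percent-encoder. The sketch below illustrates that shape with the RFC 2396 "unreserved" characters (plus "/" and ":" so path separators survive); it is an illustration, not ftp's actual encoder:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Percent-encode a URL path, treating "~" as unreserved per
 * RFC 2396.  Hypothetical sketch; bounds are kept simple. */
static void
encode_url(char *dst, size_t dstlen, const char *src)
{
	static const char safe[] =
	    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
	    "0123456789-_.!~*'()/:";
	size_t i = 0;

	for (; *src != '\0' && i + 4 <= dstlen; src++) {
		if (strchr(safe, *src) != NULL)
			dst[i++] = *src;
		else
			i += snprintf(dst + i, dstlen - i, "%%%02X",
			    (unsigned char)*src);
	}
	dst[i] = '\0';
}
```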
|
|
|
|
ptr++
ok claudio
|
|
ok jca robert
|
|
Reported by bentley@; ok bentley@ jca@
|
|
Replace fparseln(3) with getline(3). This removes the only use of
libutil.a(fparseln.o) from the ramdisk.
Replace a complicated fgetln(3) idiom with the much simpler getline(3).
ok jca@
|
|
fetching over http(s) and use the timestamps from the remote server's
Last-Modified header if available when saving local files
this makes it possible to mirror files better with ftp(1)
the new timestamp behaviour can be disabled with the new '-u' flag
ok sthen@, input from sthen@ and gnezdo@
|
|
ok jca@, kn@
|
|
may modify the string buffer.
improved and ok jca@
|
|
Fetch aborts through SIGINT (^C) print a message with fputs(3), but this
calls malloc() on its own, which is not supported from interrupt handler
context.
Fix it by using write(2) which avoids further memory allocations.
While here, merge abortfile() into the identical aborthttp() with a more
generic "fetch aborted." message for simplicity.
Spotted with vm.malloc_conf=SU and ^C on a port's "make fetch" causing
ftp(49660) in malloc(): recursive call
Abort trap (core dumped)
OK jca (who came up with using write(2) independently)
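The fix amounts to using write(2), which is on sigaction(2)'s list of async-signal-safe functions, while fputs(3) may allocate and must not run in a handler. A minimal sketch with a hypothetical handler name:

```c
#include <assert.h>
#include <signal.h>
#include <unistd.h>

/* Async-signal-safe abort message: write(2) directly, never
 * stdio, since stdio may call malloc(3) under the hood. */
static void
abortfetch(int signo)
{
	const char msg[] = "\nfetch aborted.\n";

	(void)signo;
	(void)write(STDERR_FILENO, msg, sizeof(msg) - 1);
	/* a longjmp(3) back to the fetch loop would follow here */
}
```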
|
|
Consistently disarm the SIGINT handler on error, else a SIGINT can lead
to taking twice the cleanup path. Initial report by naddy@, ok tb@
|
|
|
|
ok yasuoka@
|
|
Not handling it is incorrect and can lead to credentials leaks in DNS
requests. The resulting growth is reasonable (about 300 bytes on
amd64).
ok yasuoka@
|
|
is work in progress.
|
|
First look for userinfo, and overwrite it to make sure it doesn't
reappear later.
Then reset the path to fix the fragile mechanism that produces the full
request URI for the proxied connection case.
ok yasuoka@
|
|
https server with user/password through "http_proxy" environment
variable work properly.
ok jca
|
|
- allocate read buffer before setjmp(3) so that its value is properly
defined when longjmp(3) returns
- only mark as volatile variables modified after setjmp(3) and used
again after a possible return from longjmp(3)
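The two rules above can be shown in a toy function: the buffer is allocated before setjmp(3), so its value is defined after the jump, while the counter is modified after setjmp(3) and read after the jump, so it must be volatile. A hypothetical sketch, not ftp's code:

```c
#include <assert.h>
#include <setjmp.h>
#include <stdlib.h>

static jmp_buf env;

/* "buf" is set before setjmp() and never changed after, so no
 * volatile is needed; "tries" changes between setjmp() and
 * longjmp(), so it must be volatile to stay defined. */
static int
fetch_once(void)
{
	char *buf = malloc(64);		/* allocated before setjmp */
	volatile int tries = 0;

	if (setjmp(env) != 0) {
		free(buf);
		return tries;
	}
	tries = 1;
	longjmp(env, 1);		/* simulated abort */
}
```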
|
|
Changes already present in file_get()
- no need to special case write(2) returning 0
- clearer loop condition
- fix read error detection and properly save errno
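The loop shape those points describe might look like the sketch below: no special case for write(2) returning 0, a plain loop condition, and errno saved across any cleanup after a read error. A hypothetical illustration, not the file_get() code itself:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Copy from one fd to another until EOF.  Returns 0 on success,
 * -1 with errno set on read or write error. */
static int
copy_fd(int from, int to)
{
	char buf[BUFSIZ];
	ssize_t r, off, w;

	while ((r = read(from, buf, sizeof(buf))) > 0) {
		for (off = 0; off < r; off += w) {
			if ((w = write(to, buf + off, r - off)) == -1)
				return -1;
		}
	}
	if (r == -1) {
		int save_errno = errno;
		/* cleanup that may clobber errno would go here */
		errno = save_errno;
		return -1;
	}
	return 0;
}
```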
|
|
correctly. This would break ftp when the handshake doesn't complete in one
shot. (noticed when making tls 1.3 connections to cloudflare.cdn)
ok jsing@
|
|
The code is mostly duplicated already, handling local files here just
makes for more complex code. Split it out to its own function. This
mechanically prevents redirections to local files.
Positive feedback from Hiltjo Posthuma
|
|
Report and fix from Hiltjo Posthuma, input from and ok deraadt@
|
|
On SMALL builds ftp_printf is just a #define to avoid a size increase.
ok millert@
|
|
Input from deraadt@
|
|
Overlooked when shuffling the HTTP/1.1 code.
|
|
from Hiltjo Posthuma
|
|
|
|
|
|
Some sites in ports start to reject HTTP/1.0 requests. Let's move on
and implement HTTP/1.1. Should fit in ramdisks.
ok sthen@ tb@
|
|
Results in better code and a size decrease.
|
|
Set up two wrappers around tls_read/write to be used along with the
not-very-portable funopen(). This kills a bunch of local code, always
a nice thing for a utility which ends up in bsd.rd.
"seems legit" deraadt@, ok kn@
|
|
case it came via a redirect)
some help from jca, discussed with aja
|
|
Keeping it around uses both local and remote resources for no good reason.
ok job@
|
|
As a side effect this shuts down the TLS connection before closing the
underlying socket for redirections.
ok job@
|
|
We just bail out if the header is absent or if the server tells us to
wait. Prodding from job@, ok sthen@ deraadt@
|
|
Basic implementation: we just retry once, and make no attempt (yet) to
parse any Retry-After header.
The idea is to work around cdn.openbsd.org sometimes replying with a 503
for reasons unknown. According to juanfra@ it sets "Retry-After: 0" so
this minimal implementation should be enough.
Different diff from espie@, test case from sthen@, input from
millert@, ok millert@ deraadt@
|
|
value < 0. errno is only updated in this case. Change all (most?)
callers of syscalls to follow this better, and let's see if this strictness
helps us in the future.
|
|
We are juggling too many things at the moment and we can't deal with
the differences in behaviour right now.
|