Based on #14135, implement TLSv1.3 early data support for the curl
command line and libcurl, with an implementation in GnuTLS.
If a known TLS session announces early data support, the feature is
enabled *and* the transfer is not "connect-only", delay the TLS
handshake until the first request is sent.
- Add --tls-earlydata as new boolean command line option for curl.
- Add CURLSSLOPT_EARLYDATA to libcurl to enable use of the feature.
- Add CURLINFO_EARLYDATA_SENT_T to libcurl, reporting the number of
bytes sent and whether the server accepted or rejected them.
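For illustration, a minimal libcurl sketch using the options above
(error handling trimmed; early data can only be sent when a TLS
session for the host is already cached in the handle):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* allow sending TLSv1.3 early data when a session is reused */
      curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_EARLYDATA);
      if(curl_easy_perform(curl) == CURLE_OK) {
        curl_off_t sent;
        /* bytes of early data sent to the server */
        if(!curl_easy_getinfo(curl, CURLINFO_EARLYDATA_SENT_T, &sent))
          printf("earlydata: %" CURL_FORMAT_CURL_OFF_T " bytes\n", sent);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }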
Implementation details:
- Store the ALPN protocol selected at the SSL session.
- When reusing the session and enabling earlydata, use exactly
that ALPN protocol for negotiation with the server. When the
session's ALPN does not match the connection's ALPN, earlydata
will not be enabled.
- Check that the server selected the correct ALPN protocol for
an earlydata connect. If the server does not confirm or reports
something different, the connect fails.
- HTTP/2: delay sending the initial SETTINGS frames during connect,
if not connect-only.
Verification:
- add test_02_32 to verify earlydata GET with nghttpx.
- add test_07_70 to verify earlydata PUT with nghttpx.
- add support in 'hx-download', 'hx-upload' clients for the feature.
Assisted-by: ad-chaos on github
Closes #15211
- multi.c: when ratelimiting a transfer stops (MSTATE_RATELIMITING ->
MSTATE_PERFORMING), run the MSTATE_PERFORMING state right away
- urldata.h: factor out upload and download progress counters into a
struct, use that for passing these to progress update functions
- progress.c/getinfo.c: change names of moved progress counters
- progress.c: use new structs and a helper struct to factor repeated
calculation into static helpers
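As a rough sketch of the shape of this factoring (struct and field
names here are hypothetical, not necessarily what urldata.h ended up
with):

  /* hypothetical sketch of the per-direction progress counters */
  struct pgrs_dir {
    curl_off_t total_size;  /* expected total, -1 if unknown */
    curl_off_t cur_size;    /* bytes transferred so far */
    curl_off_t speed;       /* current speed, bytes/second */
  };

  struct pgrs_counters {
    struct pgrs_dir ul;     /* upload */
    struct pgrs_dir dl;     /* download */
  };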
Closes #14335
Returns the time, in microseconds, during which this transfer was held
in a waiting queue before it started "for real". A transfer might be put
in a queue if, after getting started, it cannot create a new connection
or similar, due to conditions and limits imposed by the application.
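The info name is not stated above; assuming it landed as
CURLINFO_QUEUE_TIME_T, reading it would look like:

  /* fragment, assuming an easy handle `curl` after a completed
     transfer; the info name CURLINFO_QUEUE_TIME_T is an assumption */
  curl_off_t queue_us;
  if(!curl_easy_getinfo(curl, CURLINFO_QUEUE_TIME_T, &queue_us))
    printf("queued: %" CURL_FORMAT_CURL_OFF_T " microseconds\n", queue_us);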
Ref: #12293
Closes #12368
Previously it would only stop transfers from getting started if the
size was already known to be too big at that point.
Update the libcurl and curl docs accordingly.
Fixes #11810
Reported-by: Elliot Killick
Assisted-by: Jay Satiro
Closes #11820
New cfilter HTTPS-CONNECT for h3/h2/http1.1 eyeballing.
- filter is installed when `--http3` is used in the tool (or
the equivalent `CURLOPT_` setting is done in the library)
- starts a QUIC/HTTP/3 connect right away. Should that not
succeed after 100ms (subject to change), a parallel attempt
is started for HTTP/2 and HTTP/1.1 via TCP
- both attempts are subject to IPv6/IPv4 eyeballing, same
as happens for other connections
- tie timeout to the ip-version HAPPY_EYEBALLS_TIMEOUT
- use a `soft` timeout at half the value. When the soft timeout
expires, the HTTPS-CONNECT filter checks if the QUIC filter
has received any data from the server. If not, it will start
the HTTP/2 attempt.
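In library terms, this path is taken when HTTP/3 is requested without
forbidding a fallback, roughly:

  /* fragment, assuming an initialized easy handle `curl` */
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* request HTTP/3; unlike CURL_HTTP_VERSION_3ONLY, this still lets
     the connect filter fall back to HTTP/2 or HTTP/1.1 over TCP */
  curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);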
HTTP/3(ngtcp2) improvements.
- setting call_data in all cfilter calls, similar to http/2 and vtls
filters, for use in callbacks where no stream data is available.
- returning CURLE_PARTIAL_FILE for prematurely terminated transfers
- enabling pytest test_05 for h3
- shifting functionality to "connect" UDP sockets from the ngtcp2
implementation into the UDP socket cfilter, because unconnected
UDP sockets are weird. For example, they error when added to a
pollset.
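"Connecting" a UDP socket just pins its peer address so the fd behaves
like other sockets in a pollset; a generic POSIX sketch, not curl's
actual code:

  #include <sys/socket.h>
  #include <unistd.h>

  /* fix a UDP socket's peer with connect() so plain send()/recv()
     work and the fd can be polled like any connected socket */
  static int udp_connect(const struct sockaddr *peer, socklen_t peerlen)
  {
    int fd = socket(peer->sa_family, SOCK_DGRAM, 0);
    if(fd < 0)
      return -1;
    if(connect(fd, peer, peerlen) < 0) {
      close(fd);
      return -1;
    }
    return fd;
  }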
HTTP/3(quiche) improvements.
- fixed upload bug in quiche implementation, now passes test 251 and pytest
- error codes on stream RESET
- improved debug logs
- handling of DRAIN during connect
- limiting pending event queue
HTTP/2 cfilter improvements.
- use LOG_CF macros for dynamic logging in debug build
- fix CURLcode on RST streams to be CURLE_PARTIAL_FILE
- enable pytest test_05 for h2
- fix upload pytests and improve parallel transfer performance.
GOAWAY handling for ngtcp2/quiche
- during connect, when the remote server refuses to accept new connections
and closes immediately (so the local conn goes into DRAIN phase), the
connection is torn down and another attempt is made after a short grace
period.
This is the behaviour observed with nghttpx when we tell it to shut
down gracefully. Tested in pytest test_03_02.
TLS improvements
- ALPN selection for SSL/SSL-PROXY filters in one set of vtls functions,
replacing the copied logic in all TLS backends.
- standardized the infof logging of offered ALPNs
- ALPN negotiated: have a common function for all backends that sets the
ALPN property and connection related things based on the negotiated
protocol (or lack thereof).
- new tests/tests-httpd/scorecard.py for testing h3/h2 protocol implementation.
Invoke:
python3 tests/tests-httpd/scorecard.py --help
for usage.
Improvements on gathering connect statistics and socket access.
- new CF_CTRL_CONN_REPORT_STATS cfilter control for having cfilters
report connection statistics. This is triggered when the connection
has completely connected.
- new void Curl_pgrsTimeWas(..) function to report a timer update with
a timestamp of when it happened. This allows for updating timers
"later", e.g. a connect statistic after full connectivity has been
reached.
- in case of HTTP eyeballing, the previous changes will update
statistics only from the filter chain that "won" the eyeballing.
- new cfilter query CF_QUERY_SOCKET for retrieving the socket used
by a filter chain.
Added functions Curl_conn_cf_get_socket() and Curl_conn_get_socket()
for convenient use of this query.
- Change VTLS backends to query their sub-filters for the socket when
checks during the handshake are made.
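A hedged sketch of what a call site for the socket query could look
like (the exact signatures are assumptions, not taken from the source):

  /* hypothetical call site; the signature of
     Curl_conn_cf_get_socket() is assumed here */
  curl_socket_t sock = Curl_conn_cf_get_socket(cf, data);
  if(sock != CURL_SOCKET_BAD) {
    /* e.g. wait on this socket while the TLS handshake is pending */
  }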
HTTP/3 documentation on how HTTPS eyeballing works.
Scorecard with Caddy.
- configure can be run with `--with-test-caddy=path` to specify which caddy to use for testing
- tests/tests-httpd/scorecard.py now measures download speeds with caddy
pytest improvements
- adding Makefile to clean gen dir
- adding nghttpx rundir creation on start
- checking for httpd version 2.4.55 in test_05 cases where it is needed,
skipping with a message if it is too old.
- catch exception when checking for caddy existence on the system.
Closes #10349
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow copyright statements without years
Closes #10205
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes #8869
Setting a timeout to INT_MAX could cause an immediate error to get
returned as timeout because of an overflow when different values of
'now' were used.
This is primarily fixed by having Curl_pgrsTime() return the "now" when
TIMER_STARTSINGLE is set so that the parent function will continue using
that time.
Reported-by: Ionuț-Francisc Oancea
Fixes #5583
Closes #5847
- replace tabs with spaces where possible
- remove line ending spaces
- remove double/triple newlines at EOF
- fix a non-UTF-8 character
- cleanup a few indentations/line continuations
in manual examples
Closes https://github.com/curl/curl/pull/3037
Due to very frequent updates of the rate limit "window", it could
attempt to rate limit within the same millisecond, which made the
calculations wrong, leading to it not behaving correctly on very
fast transfers.
This new logic updates the rate limit "window" to be no shorter than the
last three seconds and only updates its timestamps when switching
between the TOOFAST/PERFORM states.
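Conceptually, the wait time now derives from what was transferred since
the window started, along these lines (a simplified sketch, not the
actual progress.c code):

  /* simplified sketch: how long to pause so that 'size' bytes,
     transferred in 'elapsed_ms' since the window start, stay at or
     under 'limit' bytes/second */
  static long limit_wait_ms(curl_off_t size, curl_off_t limit,
                            long elapsed_ms)
  {
    long minimum_ms = (long)(size * 1000 / limit);
    return (elapsed_ms < minimum_ms) ? (minimum_ms - elapsed_ms) : 0;
  }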
Reported-by: 刘佩东
Fixes #2386
Closes #2388
Update the progress timers `t_nslookup`, `t_connect`, `t_appconnect`,
`t_pretransfer`, and `t_starttransfer` to track the total times for
these activities when a redirect is followed. Previously, only the times
for the most recent request would be tracked.
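For example, after a transfer that followed redirects, the reported
times now cover all requests combined:

  /* fragment, assuming an initialized easy handle `curl` */
  double t_connect, t_start;
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/redirecting");
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  if(curl_easy_perform(curl) == CURLE_OK) {
    /* totals across all followed requests, not just the last one */
    curl_easy_getinfo(curl, CURLINFO_CONNECT_TIME, &t_connect);
    curl_easy_getinfo(curl, CURLINFO_STARTTRANSFER_TIME, &t_start);
  }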
Related changes:
- Rename `Curl_pgrsResetTimesSizes` to `Curl_pgrsResetTransferSizes`
now that the function only resets transfer sizes and no longer
modifies any of the progress timers.
- Add a bool to the `Progress` struct that is used to prevent
double-counting `t_starttransfer` times.
Added test case 1399.
Fixes #522 and Known Bug 1.8
Closes #1602
Reported-by: joshhe on github
... to make all libcurl internals able to use the same data types for
the struct members. The timeval struct differs subtly on several
platforms so it makes it cumbersome to use everywhere.
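The replacement looks roughly like this (member types quoted from
memory, so treat this as a sketch):

  /* libcurl-internal time type with the same member types on all
     platforms, unlike struct timeval */
  struct curltime {
    time_t tv_sec;
    int tv_usec;
  };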
Ref: #1652
Closes #1693
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the
limits with the cumulative average speed of the entire transfer; while
this might work at times with good/constant connections, in other cases
it can result in the limits simply being "ignored" for more than "short
bursts" (as told in the man page).
Consider a download that goes on much slower than the limit for some
time (because bandwidth is used elsewhere, the server is slow, whatever
the reason); once things get better, curl would simply ignore the limit
until the average speed (since the beginning of the transfer) reached
the limit. This could render the limit useless for effectively avoiding
use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and every
time at least as much as the limit has been transferred, we can reset
this starting point to the current position. This gets a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of sudden speed bursts).
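Applications set the caps the same way as before; only the enforcement
changed. For example:

  /* fragment, assuming an initialized easy handle `curl`: cap the
     download speed at 100 KB/s, now enforced against the moving
     starting point */
  curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE,
                   (curl_off_t)102400);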
Closes #971
In commit 0b3750b5c2 (released in 7.36.0) we fixed a timeout issue
but instead broke the timings.
To fix this, I introduce a new timestamp to use for the timeouts and
restored the previous timestamp and timestamp position so that the old
timer functionality is restored.
In addition, that change also broke connection timeouts when more
than one connect was used (as it would then count the total time
from the first connect and not for the most recent one). Now
Curl_timeleft() has been modified so that it checks against different
start times depending on which timeout it checks.
Test 1303 is updated accordingly.
Bug: http://curl.haxx.se/mail/lib-2014-05/0147.html
Reported-by: Ryan Braud
This reverts renaming and usage of lib/*.h header files done
28-12-2012, reverting 2 commits:
f871de0... build: make use of 76 lib/*.h renamed files
ffd8e12... build: rename 76 lib/*.h files
This also reverts removal of redundant include guard (redundant thanks
to changes in above commits) done 2-12-2013, reverting 1 commit:
c087374... curl_setup.h: remove redundant include guard
This also reverts renaming and usage of lib/*.c source files done
3-12-2013, reverting 3 commits:
13606bb... build: make use of 93 lib/*.c renamed files
5b6e792... build: rename 93 lib/*.c files
7d83dff... build: commit 13606bbfde follow-up 1
Start of related discussion thread:
http://curl.haxx.se/mail/lib-2013-01/0012.html
Asking for confirmation on pushing this reversion commit:
http://curl.haxx.se/mail/lib-2013-01/0048.html
Confirmation summary:
http://curl.haxx.se/mail/lib-2013-01/0079.html
NOTICE: The list of 2 files that have been modified by other
intermixed commits, while renamed, and also by at least one
of the 6 commits this one reverts follows below. These 2 files
will exhibit a hole in history unless git's '--follow' option
is used when viewing logs.
lib/curl_imap.h
lib/curl_smtp.h
Since Curl_pgrsDone() itself calls Curl_pgrsUpdate(), which may return
an abort instruction or similar, we need to return that info back and
subsequently properly handle return codes from Curl_pgrsDone() where
used.
(Spotted by a Coverity scan)
With FOLLOWLOCATION enabled, when a 3xx page was downloaded and the
download size was known (like with a Content-Length header), but the
subsequent URL (transferred after the 3xx page) was chunked encoded,
then the previous "known download size" would linger and cause the
progress meter to get incorrect information, i.e. the former value
kept being used. This could easily result in downloads that appeared
WAY larger than "expected" and would cause >100% outputs with the curl
command line tool.
Test case 599 was created and it was used to repeat the bug and then
verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3510057
Reported-by: Michael Wallner
1- Two new error codes are introduced.
CURLE_FTP_ACCEPT_FAILED is set whenever accepting the FTP server's
data connection fails.
CURLE_FTP_ACCEPT_TIMEOUT is set whenever accepting times out.
Neither of these errors is considered fatal and the control connection
remains OK, since the cause could just be a firewall blocking the
server from connecting to the client.
2- One new setopt option was introduced.
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client will wait for the
server to connect back. The internal default accept timeout is 60
seconds.
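For example, for an active mode FTP transfer (a sketch; error handling
omitted):

  /* fragment, assuming an initialized easy handle `curl` */
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file");
  curl_easy_setopt(curl, CURLOPT_FTPPORT, "-"); /* active mode */
  /* wait at most 10 seconds for the server to connect back, instead
     of the internal 60 second default */
  curl_easy_setopt(curl, CURLOPT_ACCEPTTIMEOUT_MS, 10000L);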
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using
this you can figure out the application layer's own connect time. You
can extract the time stamp using curl's -w option and the new variable
named 'time_appconnect'. This feature was sponsored by Lenny Rachitsky
at NeuStar.
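From a libcurl application, the same value is available via
curl_easy_getinfo(), for example:

  /* fragment, assuming an easy handle `curl` after a completed
     HTTPS transfer */
  double appconnect; /* seconds until the SSL/SSH handshake finished */
  if(!curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect))
    printf("app connect: %.6f seconds\n", appconnect);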