Output the 'Connected to...' info message when the connection has been
fully established and all information is available.
Due to our happy eyeballing, we should not emit info messages in
filters, because they may be part of an eyeballing attempt and may be
discarded later for another chain.
Closes#14897
Do not give up connecting to servers that are in draining state. This
might indicate that the QUIC server is restarting and the UDP packet
routing is still hitting the instance that is shutting down.
Instead, keep on connecting until the overall TIMEOUT fires.
Closes#14863
Always try IPv6 addresses first, IPv4 second after a delay.
If neither IPv4 nor IPv6 is among the supplied addresses, start a happy
eyeballer for the first address family present. This is for AF_UNIX
connects.
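For illustration only, here is a minimal sketch of that family-selection
rule, using the standard `addrinfo` type as a stand-in for curl's resolved
address list (not curl's actual code):
```
#include <netdb.h>        /* struct addrinfo */
#include <sys/socket.h>   /* AF_INET, AF_INET6, AF_UNSPEC */

/* Sketch: which family should the first eyeballer attempt use?
 * Prefer IPv6, then IPv4; if neither is present (e.g. AF_UNIX),
 * take whatever family the first entry has. */
static int first_family(const struct addrinfo *list)
{
  const struct addrinfo *ai;
  for(ai = list; ai; ai = ai->ai_next)
    if(ai->ai_family == AF_INET6)
      return AF_INET6;
  for(ai = list; ai; ai = ai->ai_next)
    if(ai->ai_family == AF_INET)
      return AF_INET;
  return list ? list->ai_family : AF_UNSPEC;
}
```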
Fixes#14761
Reported-by: janedenone on hackerone
Closes#14768
Update IP-related information at the connection and the transfer in two
places only: once when the filter chain connects and once when a transfer is
added to a connection. The latter only updates on reuse when the filters
already are connected.
The only user of that information before a full connect is the HAProxy
filter. Add cfilter CF_QUERY_IP_INFO query to let it find the
information from the filters "below".
This solves two issues with the previous version:
- updates were often done twice with the same info
- happy eyeballing filter "forks" could overwrite each other's
updates before the full winner was determined.
Closes#14699
This is a better match for what they do and the general "cpool"
var/function prefix works well.
The pool now handles very long hostnames correctly.
The following changes have been made:
* 'struct connectdata', i.e. connections, gets new members
named `destination` and `destination_len` that fully specify the
interface+port+hostname of where the connection is going.
This is used in the pool for "bundling" connections with
the same destination. There is no longer a limit on the length
(see the sketch after this list).
* Locking: all locks are done inside conncache.c when calling
into the pool and released on return. This eliminates hazards
of the callers keeping track.
* 'struct connectbundle' is now internal to the pool. It is no
longer referenced by a connection.
* 'bundle->multiuse' no longer exists. HTTP/2, HTTP/3 and TLS filters
no longer need to set it. Instead, the multi checks on leaving
MSTATE_CONNECT or MSTATE_CONNECTING if the connection is now
multiplexed and new, i.e. not conn->bits.reuse. In that case
the processing of pending handles is triggered.
* The pool's init is provided with a callback to invoke on all
connections being discarded. This allows the cleanups in
`Curl_disconnect` to run, wherever it is decided to retire
a connection.
* Several pool operations can now be fully done with one call.
Pruning dead connections, upkeep and checks on pool limits
can now directly discard connections and no longer need to return
them to the caller for doing that (as we now have the callback
described above).
* Finding a connection for reuse is now done via `Curl_cpool_find()`
and the caller provides callbacks to evaluate the connection
candidates.
* 'Curl_cpool_check_limits()' now directly uses the max values
that may be set in the transfer's multi. There is no need to pass them
around. Curl_multi_max_host_connections() and
Curl_multi_max_total_connections() are gone.
* Add method 'Curl_node_llist()' to get the llist a node is in.
Used in cpool to verify connections are indeed in the list (or
not in any list) as they need to be.
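As a standalone sketch of the unbounded `destination` idea from the first
bullet above (a hypothetical helper, not curl's actual code):
```
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: build a destination key of whatever length is
 * needed, instead of a fixed-size buffer that long hostnames overflow. */
static char *make_destination(const char *iface, int port, const char *host,
                              size_t *dest_len)
{
  int n = snprintf(NULL, 0, "%s:%d:%s", iface ? iface : "", port, host);
  char *dest = (n < 0) ? NULL : malloc((size_t)n + 1);
  if(dest) {
    snprintf(dest, (size_t)n + 1, "%s:%d:%s", iface ? iface : "", port, host);
    *dest_len = (size_t)n;
  }
  return dest;
}
```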
I left the conncache.[ch] as is for now and also did not touch the
documentation. If we update that outside the feature window, we can
do this in a separate PR.
Multi-thread safety is not achieved by this PR, but since more details
on how pools operate are now "internal" it is a better starting
point to go for this in the future.
Closes#14662
Connections being shut down would register sockets for events, but then
never remove these sockets again. Nor would the shutdown effectively
be performed.
- If a socket event involves a transfer, check if that is the
connection cache internal handle and run its multi_perform()
instead (the internal handle is used for all shutdowns).
- When a timer triggers for a transfer, check also if it is
about the connection cache internal handle.
- While processing shutdowns in the connection cache, assess
the shutdown timeouts. Register a Curl_expire() with the lowest
value for the cache's internal handle.
Reported-by: Gordon Parke
Fixes#14280
Closes#14296
Based on the standards and guidelines we use for our documentation.
- expand contractions (they're => they are etc)
- host name => hostname
- file name => filename
- user name => username
- man page => manpage
- run-time => runtime
- set-up => setup
- back-end => backend
- a HTTP => an HTTP
- Two spaces after a period => one space after period
Closes#14073
- clarify Curl_xfer_setup() with RECV/SEND flags and different calls for
which socket they operate on. Add a shutdown flag for secondary
sockets
- change Curl_xfer_setup() calls to new functions
- implement non-blocking connection shutdown at the end of receiving or
sending a transfer
Closes#13913
This adds connection shutdown infrastructure and first use for FTP. FTP
data connections, when not encountering an error, are now shut down in a
blocking way with a 2sec timeout.
- add cfilter `Curl_cft_shutdown` callback
- keep a shutdown start timestamp and timeout at connectdata
- provide shutdown timeout default and member in
`data->set.shutdowntimeout`.
- provide methods for starting, interrogating and clearing
shutdown timers
- provide `Curl_conn_shutdown_blocking()` to shut down the
`sockindex` filter chain in a blocking way (see the sketch after
this list). Use that in FTP.
- add `Curl_conn_cf_poll()` to wait for socket events during
shutdown of a connection filter chain.
This gets the monitoring sockets and events via the filters'
"adjust_pollset()" methods. This gives correct behaviour when
shutting down a TLS connection through an HTTP/2 proxy.
- Implement shutdown for all socket filters
- for HTTP/2 and h2 proxying to send GOAWAY
- for TLS backends to the best of their capabilities
- for tcp socket filter to make a final, nonblocking
receive to avoid unwanted RST states
- add shutdown forwarding to happy eyeballers and
https connect ballers when applicable.
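A concept sketch of the blocking shutdown with a deadline (e.g. the 2sec FTP
case); the step callback here is hypothetical and stands in for walking the
filter chain's shutdown callbacks described above:
```
#include <poll.h>
#include <stdbool.h>
#include <time.h>

typedef int (*shutdown_step_cb)(int sockfd, bool *done);

/* Sketch only: run non-blocking shutdown steps until done or until the
 * deadline expires, waiting on socket readiness in between. */
static bool shutdown_blocking(int sockfd, long timeout_ms,
                              shutdown_step_cb step)
{
  struct timespec start, now;
  clock_gettime(CLOCK_MONOTONIC, &start);
  for(;;) {
    bool done = false;
    if(step(sockfd, &done) != 0 || done)
      return done;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long elapsed = (now.tv_sec - start.tv_sec) * 1000 +
                   (now.tv_nsec - start.tv_nsec) / 1000000;
    if(elapsed >= timeout_ms)
      return false;                             /* timed out */
    struct pollfd pfd = { sockfd, POLLIN | POLLOUT, 0 };
    poll(&pfd, 1, (int)(timeout_ms - elapsed)); /* wait for socket events */
  }
}
```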
Closes#13904
Before this patch `lib/curl_setup.h` defined these two macros right
next to each other, then the source code used them interchangeably.
After this patch, `USE_HTTP3` guards all HTTP/3 / QUIC features.
(Like `USE_HTTP2` does for HTTP/2.) `ENABLE_QUIC` is no longer used.
This patch doesn't change the way HTTP/3 is enabled via autotools
or CMake. Builders who enabled HTTP/3 manually by defining both of
these macros via `CPPFLAGS` can now delete `-DENABLE_QUIC`.
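A minimal, self-contained illustration of the single guard (placeholder
code, not from curl):
```
#include <stdio.h>

static void report_http3_support(void)
{
#ifdef USE_HTTP3
  puts("built with HTTP/3 (QUIC) support");
#else
  puts("built without HTTP/3 support");
#endif
}

int main(void)
{
  report_http3_support();
  return 0;
}
```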
Closes#13352
Before this patch, two macros were used to guard IPv6 features in curl
sources: `ENABLE_IPV6` and `USE_IPV6`. This patch makes the source use
the latter for consistency with other similar switches.
`-DENABLE_IPV6` remains accepted for compatibility as a synonym for
`-DUSE_IPV6`, when passed to the compiler.
`ENABLE_IPV6` also remains the name of the CMake and `Makefile.vc`
options to control this feature.
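A sketch of how such a synonym can be honoured at the preprocessor level;
the actual mapping in curl's build files and headers may differ:
```
/* Sketch only: accept -DENABLE_IPV6 as a synonym for -DUSE_IPV6 */
#if defined(ENABLE_IPV6) && !defined(USE_IPV6)
#define USE_IPV6
#endif

#ifdef USE_IPV6
/* IPv6-specific code goes here */
#endif
```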
Closes#13349
New struct ip_quadruple for holding local/remote addr+port (sketched below)
- used in data->info and conn and cf-socket.c
- copy back and forth complete struct
- add 'secondary' to conn
- use secondary in reporting success for ftp 2nd connection
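A rough sketch of the struct's shape; the member names and buffer sizes here
are assumptions based on the description, not necessarily curl's exact
definition:
```
/* Assumed shape, for illustration only */
struct ip_quadruple {
  char remote_ip[46];   /* large enough for an IPv6 literal */
  char local_ip[46];
  int  remote_port;
  int  local_port;
};
```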
Reported-by: DasKutti on github
Fixes#13084
Closes#13090
https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html
as of 2023-11-29 [1].
Enable new recommended warnings (except `-Wsign-conversion`):
- enable `-Wformat=2` for clang (in both cmake and autotools).
- add `CURL_PRINTF()` internal attribute and mark functions accepting
printf arguments with it. This is a copy of the existing
`CURL_TEMP_PRINTF()` but using `__printf__` to make it compatible
with redefining the `printf` symbol (see the sketch after this list):
https://gcc.gnu.org/onlinedocs/gcc-3.0.4/gcc_5.html#SEC94
- fix `CURL_PRINTF()` and existing `CURL_TEMP_PRINTF()` for
mingw-w64 and enable it on this platform.
- enable `-Wimplicit-fallthrough`.
- enable `-Wtrampolines`.
- add `-Wsign-conversion` commented with a FIXME.
- cmake: enable `-pedantic-errors` the way we do it with autotools.
Follow-up to d5c0351055 #2747
- lib/curl_trc.h: use `CURL_FORMAT()`; this also enables the format
checks, which were previously always disabled due to the internal
`printf` macro.
Fix them:
- fix bug where an `set_ipv6_v6only()` call was missed in builds with
`--disable-verbose` / `CURL_DISABLE_VERBOSE_STRINGS=ON`.
- add internal `FALLTHROUGH()` macro.
- replace obsolete fall-through comments with `FALLTHROUGH()`.
- fix fallthrough markups: Delete redundant ones (showing up as
warnings in most cases). Add missing ones. Fix indentation.
- silence `-Wformat-nonliteral` warnings with llvm/clang.
- fix one `-Wformat-nonliteral` warning.
- fix new `-Wformat` and `-Wformat-security` warnings.
- fix `CURL_FORMAT_SOCKET_T` value for mingw-w64. Also move its
definition to `lib/curl_setup.h` allowing use in `tests/server`.
- lib: fix two wrongly passed string arguments in log outputs.
Co-authored-by: Jay Satiro
- fix new `-Wformat` warnings on mingw-w64.
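For reference, attribute macros of this kind typically look like the sketch
below; curl's actual definitions (compiler-version checks, MSVC/mingw-w64
handling) may differ:
```
/* Sketch only; not curl's exact definitions */
#if defined(__GNUC__) || defined(__clang__)
#define CURL_PRINTF(fmt, arg) __attribute__((format(__printf__, fmt, arg)))
#define FALLTHROUGH() __attribute__((fallthrough))
#else
#define CURL_PRINTF(fmt, arg)
#define FALLTHROUGH() do {} while(0)
#endif

/* usage: the compiler checks the format string against the arguments */
void my_infof(const char *fmt, ...) CURL_PRINTF(1, 2);
```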
[1] 56c0fde389/docs/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C%2B%2B.md
Closes#12489
- when a connect immediately goes into DRAINING state, do
not attempt retries in the QUIC connection filter. Instead,
return CURLE_WEIRD_SERVER_REPLY
- When eyeballing, interpret CURLE_WEIRD_SERVER_REPLY as an
inconclusive answer. When all addresses have been attempted,
rewind the address list once on an inconclusive answer.
- refs #11832 where connects were retried indefinitely until
the overall timeout fired
Closes#12400
GCC 14 introduces a new -Walloc-size included in -Wextra which gives:
```
src/tool_operate.c: In function ‘add_per_transfer’:
src/tool_operate.c:213:5: warning: allocation of insufficient size ‘1’ for type ‘struct per_transfer’ with size ‘480’ [-Walloc-size]
213 | p = calloc(sizeof(struct per_transfer), 1);
| ^
src/var.c: In function ‘addvariable’:
src/var.c:361:5: warning: allocation of insufficient size ‘1’ for type ‘struct var’ with size ‘32’ [-Walloc-size]
361 | p = calloc(sizeof(struct var), 1);
| ^
```
The calloc prototype is:
```
void *calloc(size_t nmemb, size_t size);
```
So, just swap the number of members and size arguments to match the
prototype, as we're initialising 1 struct of size `sizeof(struct
...)`. GCC then sees we're not doing anything wrong.
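With the arguments in prototype order, the first call above becomes:
```
p = calloc(1, sizeof(struct per_transfer));  /* nmemb first, then size */
```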
Closes#12292
Connection filters had a `get_select_socks()` method, inspired by the
various `getsocks` functions involved during the lifetime of a
transfer. These, depending on transfer state (CONNECT/DO/DONE etc.),
return sockets to monitor and flag if this shall be done for POLLIN
and/or POLLOUT.
Due to this design, sockets and flags could only be added, not
removed. This led to problems in filters like HTTP/2 where flow control
prohibits the sending of data until the peer increases the flow
window. The general transfer loop wants to write, adds POLLOUT, the
socket is writeable but no data can be written.
This leads to cpu busy loops. To prevent that, HTTP/2 did set the
`SEND_HOLD` flag of such a blocked transfer, so the transfer loop cedes
further attempts. This works if only one such filter is involved. If an
HTTP/2 transfer goes through an HTTP/2 proxy, two filters are
setting/clearing this flag and may step on each other's toes.
Connection filters `get_select_socks()` is replaced by
`adjust_pollset()`. They get passed a `struct easy_pollset` that keeps
up to `MAX_SOCKSPEREASYHANDLE` sockets and their `POLLIN|POLLOUT`
flags. This struct is initialized in `multi_getsock()` by calling the
various `getsocks()` implementations based on transfer state, as before.
After protocol handlers/transfer loop have set the sockets and flags
they want, the `easy_pollset` is *always* passed to the filters. Filters
"higher" in the chain are called first, starting at the first
not-yet-connected one. Each filter may add sockets and/or change
flags. When all flags are removed, the socket itself is removed from the
pollset.
Example:
* transfer wants to send, adds POLLOUT
* http/2 filter has a flow control block, removes POLLOUT and adds
POLLIN (it is waiting on a WINDOW_UPDATE from the server)
* TLS filter is connected and changes nothing
* h2-proxy filter also has a flow control block on its tunnel stream,
removes POLLOUT and adds POLLIN also.
* socket filter is connected and changes nothing
* The resulting pollset is then mixed together with all other transfers
and their pollsets, just as before.
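A toy sketch of the single-socket case from that walkthrough; the type and
helper here are made-up stand-ins, not curl's `easy_pollset` API:
```
#include <stdbool.h>

/* Toy stand-ins for the real types, just to show the data flow */
typedef int curl_sock_t;
struct toy_pollset { curl_sock_t sock; bool want_in; bool want_out; };

static void h2_adjust_pollset(struct toy_pollset *ps, bool send_blocked)
{
  if(send_blocked) {       /* flow control: peer must send WINDOW_UPDATE */
    ps->want_out = false;  /* stop asking for writeability */
    ps->want_in = true;    /* wait for the server's WINDOW_UPDATE */
  }
  /* if neither flag remains set, the socket drops out of the pollset */
}
```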
Use of `SEND_HOLD` is no longer necessary in the filters.
All filters are adapted for the changed method. The handling in
`multi.c` has been adjusted, but its state handling and the protocol
handlers' `getsocks` methods are untouched.
The most affected filters are http/2, ngtcp2, quiche and h2-proxy. TLS
filters needed to be adjusted for the connecting handshake read/write
handling.
No noticeable difference in performance was detected in local scorecard
runs.
Closes#11833
... so that it gets called again immediately and can continue trying
addresses to connect to. Otherwise it might unnecessarily wait for a
while there.
Fixes#11920
Reported-by: Loïc Yhuel
Closes#11935
- delete completed TODO from `./CMakeLists.txt`.
- convert a C++ comment to C89 in `./CMake/CurlTests.c`.
- delete duplicate EOLs from EOF.
- add missing EOL at EOF.
- delete whitespace at EOL (except from expected test results).
- convert tabs to spaces.
- convert CRLF EOLs to LF in GHA yaml.
- text casing fixes in `./CMakeLists.txt`.
- fix a codespell typo in `packages/OS400/initscript.sh`.
Closes#11772
When curl wants to connect to a host, it always has a TIMEOUT. The
maximum time it is allowed to spend until a connect is confirmed.
curl will try to connect to each of the IP addresses returned for the
host. Two loops, one for each IP family.
During the connect loop, while curl has more than one IP address left to
try within a single address family, curl has traditionally allowed (time
left/2) for *this* connect attempt. This is done to not get stuck on the
initial addresses in case they time out, but still allow later addresses to
get attempted.
This has the downside that when users set a very short timeout and the
host has a large number of IP addresses, the effective result might be
that every attempt gets a little too short time.
This change stops dividing by two if the total time left is below a
threshold. This threshold is 600 milliseconds.
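A concept sketch of the resulting rule (not curl's actual code; the 600 ms
threshold is from the text above):
```
#include <stdbool.h>

#define DIVIDE_THRESHOLD_MS 600L

/* Sketch only: per-attempt allowance given the total time left and
 * whether more addresses in this family remain to be tried. */
static long attempt_timeout_ms(long total_left_ms, bool more_addresses)
{
  if(more_addresses && total_left_ms > DIVIDE_THRESHOLD_MS)
    return total_left_ms / 2;   /* leave room for later addresses */
  return total_left_ms;         /* little time left: use all of it */
}
```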
Closes#11693
Rename `close` and `connect` in `struct Curl_cftype` for
consistency and to avoid clashes with macros of the same name
(the standard AmigaOS networking connect() function is implemented
via a macro).
Closes#11491
- add an `id` long to Curl_easy, -1 on init
- once added to a multi (or its own multi), it gets
a non-negative number assigned by the connection cache
- `id` is unique among all transfers using the same
cache until reaching LONG_MAX where it will wrap
around. So, not unique eternally.
- CURLINFO_CONN_ID returns the connection id attached to
data or, if none is present, data->state.lastconnect_id
(see the example after this list)
- variables and type declared in tool for write out
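For example, an application can ask which connection a finished transfer
used via the new info (requires a libcurl that provides CURLINFO_CONN_ID);
error handling is trimmed for brevity:
```
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t conn_id = -1;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_CONN_ID, &conn_id) == CURLE_OK)
      printf("used connection %ld\n", (long)conn_id);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```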
Closes#11185
The open paren check wants to warn for spaces before open parenthesis
for if/while/for but also for any function call. In order to avoid
catching function pointer declarations, the logic allows a space if the
first character after the open parenthesis is an asterisk.
I also spotted that we did not include "switch" in the check, but we should.
This check is a little lame, but we reduce this problem by not allowing
that space for if/while/for/switch.
Reported-by: Emanuele Torre
Closes#11044
- currently only on debug build and when env variable
CURL_PROXY_TUNNEL_H2 is present.
- will ALPN negotiate with the proxy server and switch
tunnel filter based on the protocol negotiated.
- http/1.1 tunnel code moved into cf-h1-proxy.[ch]
- http/2 tunnel code implemented in cf-h2-proxy.[ch]
- tunnel start and ALPN set remains in http_proxy.c
- moving all haproxy related code into cf-haproxy.[ch]
VTLS changes
- SSL filters rely solely on the "alpn" specification they
are created with and no longer check conn->bits.tls_enable_alpn.
- checks on which ALPN specification to use (or none at all) are
done in vtls.c when creating the filter.
Testing
- added a nghttpx forward proxy to the pytest setup that
speaks HTTP/2 and forwards all requests to the Apache httpd
forward proxy server.
- extending test coverage in test_10 cases
- adding proxy tests for direct/tunnel h1/h2 use of basic auth.
- adding test for http/1.1 and h2 proxy tunneling to pytest
Closes#10780
- time_connect was not updated when the overall connection failed,
e.g. when SSL verification was unsuccessful, refs #10670
- rework gathering those values to interrogate the involved filters,
also from all eyeballing attempts, and report the maximum of
those values (see the example after this list).
- added 3 test cases in test_06 to check reported values on
successful, partially failed and totally failed connections.
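From the application side, this timer surfaces as CURLINFO_CONNECT_TIME_T
(existing libcurl API); per the change described above it should now carry a
value even when the connection attempt failed:
```
#include <stdio.h>
#include <curl/curl.h>

/* Query the connect time after a transfer attempt */
static void print_connect_time(CURL *curl)
{
  curl_off_t t_us = 0;
  if(curl_easy_getinfo(curl, CURLINFO_CONNECT_TIME_T, &t_us) == CURLE_OK)
    printf("time_connect: %ld microseconds\n", (long)t_us);
}
```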
Reported-by: Master Inspire
Fixes#10670
Closes#10671
- connect timeout was used at half the configured value if the
destination had one IPv4 address and additional IPv6 addresses
(or the other way around)
- extended test2600 to reproduce these cases
Reported-by: Michael Kaufmann
Fixes#10514
Closes#10517
New cfilter HTTP-CONNECT for h3/h2/http1.1 eyeballing.
- filter is installed when `--http3` in the tool is used (or the
equivalent CURLOPT_ done in the library); see the example after this list
- starts a QUIC/HTTP/3 connect right away. Should that not
succeed after 100ms (subject to change), a parallel attempt
is started for HTTP/2 and HTTP/1.1 via TCP
- both attempts are subject to IPv6/IPv4 eyeballing, same
as happens for other connections
- tie timeout to the ip-version HAPPY_EYEBALLS_TIMEOUT
- use a `soft` timeout at half the value. When the soft timeout
expires, the HTTPS-CONNECT filter checks if the QUIC filter
has received any data from the server. If not, it will start
the HTTP/2 attempt.
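A small library-side example of the `--http3` equivalent mentioned in the
first bullet: request HTTP/3 and let the eyeballing described above fall
back to HTTP/2 or HTTP/1.1 over TCP if QUIC does not win (assumes a libcurl
built with HTTP/3 support):
```
#include <curl/curl.h>

static CURLcode fetch_with_h3(const char *url)
{
  CURLcode result = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
    result = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return result;
}
```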
HTTP/3(ngtcp2) improvements.
- setting call_data in all cfilter calls similar to http/2 and vtls filters
for use in callback where no stream data is available.
- returning CURLE_PARTIAL_FILE for prematurely terminated transfers
- enabling pytest test_05 for h3
- shifting functionality to "connect" UDP sockets from ngtcp2
implementation into the udp socket cfilter, because unconnected
UDP sockets are weird. For example, they error when added to a
pollset.
HTTP/3(quiche) improvements.
- fixed upload bug in the quiche implementation, now passes test 251 and pytest
- error codes on stream RESET
- improved debug logs
- handling of DRAIN during connect
- limiting pending event queue
HTTP/2 cfilter improvements.
- use LOG_CF macros for dynamic logging in debug build
- fix CURLcode on RST streams to be CURLE_PARTIAL_FILE
- enable pytest test_05 for h2
- fix upload pytests and improve parallel transfer performance.
GOAWAY handling for ngtcp2/quiche
- during connect, when the remote server refuses to accept new connections
and closes immediately (so the local conn goes into DRAIN phase), the
connection is torn down and another attempt is made after a short grace
period.
This is the behaviour observed with nghttpx when we tell it to shut
down gracefully. Tested in pytest test_03_02.
TLS improvements
- ALPN selection for SSL/SSL-PROXY filters in one vtls set of functions,
replacing the copied logic in all TLS backends.
- standardized the infof logging of offered ALPNs
- ALPN negotiated: have a common function for all backends that sets the
alpn property and connection related things based on the negotiated
protocol (or lack thereof).
- new tests/tests-httpd/scorecard.py for testing h3/h2 protocol implementation.
Invoke:
python3 tests/tests-httpd/scorecard.py --help
for usage.
Improvements on gathering connect statistics and socket access.
- new CF_CTRL_CONN_REPORT_STATS cfilter control for having cfilters
report connection statistics. This is triggered when the connection
has completely connected.
- new void Curl_pgrsTimeWas(..) method to report a timer update with
a timestamp of when it happened. This allows for updating timers
"later", e.g. a connect statistic after full connectivity has been
reached.
- in case of HTTP eyeballing, the previous changes will update
statistics only from the filter chain that "won" the eyeballing.
- new cfilter query CF_QUERY_SOCKET for retrieving the socket used
by a filter chain.
Added methods Curl_conn_cf_get_socket() and Curl_conn_get_socket()
for convenient use of this query.
- Change VTLS backend to query their sub-filters for the socket when
checks during the handshake are made.
HTTP/3 documentation on how https eyeballing works.
Scorecard with Caddy.
- configure can be run with `--with-test-caddy=path` to specify which caddy to use for testing
- tests/tests-httpd/scorecard.py now measures download speeds with caddy
pytest improvements
- adding Makefile to clean gen dir
- adding nghttpx rundir creation on start
- checking httpd version 2.4.55 for test_05 cases where it is needed, skipping with a message if too old.
- catch exception when checking for caddy existence on the system.
Closes#10349
- add test2600 as a unit test that triggers various connect conditions
and monitors behaviour, available in a debug build only.
- this exposed edge cases in connect.c that have been fixed
Closes#10312
- new functions and macros for cfilter debugging
- set CURL_DEBUG with names of cfilters where debug logging should be
enabled
- use GNUC __attribute__ to enable printf format checks during compile
Closes#10271
- ECONNREFUSED now has its own fail message in the QUIC filters
- Debug logging in connect eyeballing improved
- Fix bug in ngtcp2/quiche that could lead to false success reporting.
Reported-by: Divy Le Ray
Fixes#10245
Closes#10248