Refactoring of connection setup and happy eyeballing. Move
nghttp2, ngtcp2, quiche and msh3 into connection filters.
- eyeballing cfilter that uses sub-filters for performing parallel connects
- socket cfilter for all transport types, including QUIC
- QUIC implementations in cfilter, can now participate in eyeballing
- connection setup is more dynamic in order to adapt to which filter
  actually connected. Relevant for deciding if an SSL filter needs to be
  added or if SSL has already been provided
- HTTP/3 test cases similar to HTTP/2
- multiuse of parallel transfers for HTTP/3, tested for ngtcp2 and quiche
- Fix for data attach/detach in VTLS filters that could lead to crashes
during parallel transfers.
- Eliminating setup() methods in cfilters, no longer needed.
- Improving Curl_conn_is_alive() to replace Curl_connalive() and
  integrating SSL alive checks into the cfilter.
- Adding CF_CNTRL_CONN_INFO_UPDATE to tell filters to update
  connection info and persist it at the easy handle.
- Several more cfilter related cleanups and moves:
- stream_weight and dependency info is now wrapped in struct
  Curl_data_priority
- struct Curl_data_priority is available in HTTP2|HTTP3 enabled builds
- its dependency members additionally depend on NGHTTP2 support
- handling of init/reset/cleanup of the priority info is now part of url.c
- data->state.priority uses the same struct, but as a shallow copy for
  compares only
- PROTOPT_STREAM has been removed
- Curl_conn_is_multiplex() is now available to check on that capability
- Adding a query method to connection filters (see the sketch below).
- ngtcp2+quiche: implementing query for max concurrent transfers.
- Adding is_alive and keep_alive cfilter methods. Adding DATA_SETUP event.
- setting keepalive timestamp on connect
- DATA_SETUP is called after the connection has been completely set
  up (but may not be connected yet) to allow filters to initialize
  the data members they use.
- there is no socket to be had with msh3, so it is unclear how select
  shall work
- a manual test via "curl --http3 https://curl.se" fails with "empty
  reply from server".
- Various socket/conn related cleanups:
- Curl_socket is now Curl_socket_open and in cf-socket.c
- Curl_closesocket is now Curl_socket_close and in cf-socket.c
- Curl_ssl_use has been replaced with Curl_conn_is_ssl
- Curl_conn_tcp_accepted_set has been split into
Curl_conn_tcp_listen_set and Curl_conn_tcp_accepted_set
with a clearer purpose
Closes #10141
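A minimal sketch of the filter-callback idea described above, for example how
a query for the maximum number of concurrent transfers could travel along a
filter chain. All names here (my_cfilter, chain_query, ...) are illustrative
stand-ins, not curl's actual internal types:

```c
#include <stdbool.h>

struct my_cfilter;                  /* one filter instance in a chain */

struct my_cfilter_ops {
  int  (*do_connect)(struct my_cfilter *cf, bool *done);
  bool (*is_alive)(struct my_cfilter *cf);
  int  (*query)(struct my_cfilter *cf, int what, int *out);
};

struct my_cfilter {
  const struct my_cfilter_ops *ops;
  struct my_cfilter *next;          /* next filter towards the socket */
  void *ctx;                        /* filter-private state */
};

/* walk the chain until a filter answers, mirroring how a QUIC filter
   could report its maximum number of concurrent transfers */
static int chain_query(struct my_cfilter *cf, int what, int *out)
{
  for(; cf; cf = cf->next) {
    if(cf->ops->query && !cf->ops->query(cf, what, out))
      return 0;                     /* answered */
  }
  return -1;                        /* no filter knew */
}
```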
- `Curl_ssl_get_config()` now returns the first config if no SSL proxy
filter is active
- the socket filter starts the connection only on the first invocation of
  its connect method (sketched below)
Fixes #9982
Closes #9983
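A sketch of the "connect only on first invocation" behavior, using a
hypothetical socket-filter context rather than curl's real cf-socket code:

```c
#include <stdbool.h>

/* hypothetical filter-private context */
struct sock_ctx {
  int fd;            /* -1 until a connect attempt has been started */
  bool started;
};

static bool sock_poll_connected(struct sock_ctx *ctx)
{
  (void)ctx;         /* a real filter would poll the socket here */
  return false;
}

static int sock_cf_connect(struct sock_ctx *ctx, bool *done)
{
  if(!ctx->started) {
    ctx->started = true;
    /* open the socket and issue the non-blocking connect() only here,
       on the very first invocation */
  }
  *done = sock_poll_connected(ctx);  /* later calls only check progress */
  return 0;
}
```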
- almost all backend calls pass the Curl_cfilter instance instead of
  connectdata+sockindex
- ssl_connect_data is removed from struct connectdata and made internal
to vtls
- ssl_connect_data is allocated in the added filter, kept at cf->ctx
- added functions to let an SSL filter access its ssl_primary_config and
  ssl_config_data; these select the proper subfields in conn and data,
  for filters added as plain or proxy (see the sketch below)
- adjusted all backends to use the changed api
- adjusted all backends to access config data via the exposed
functions, no longer using conn or data directly
cfilter renames for clear purpose:
- methods `Curl_conn_*(data, conn, sockindex)` work on the complete
filter chain at `sockindex` and connection `conn`.
- methods `Curl_cf_*(cf, ...)` work on a specific Curl_cfilter
instance.
- methods `Curl_conn_cf()` work on/with filter instances at a
connection.
- rebased and resolved some naming conflicts
- hostname validation (and session lookup) on SECONDARY uses the same
  name as on FIRST (again).
new debug macros and removing connectdata from function signatures where not
needed.
adapting schannel for the new Curl_read_plain parameter.
Closes #9919
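An illustrative sketch of the state layout described above: the TLS connect
state lives at the filter's ctx and backends reach it through the filter
instance. The names (tls_filter_ctx, my_cf) are hypothetical, not the vtls
structs:

```c
struct tls_connect_state {
  int handshake_done;       /* plays the role of ssl_connect_data */
};

struct tls_filter_ctx {
  struct tls_connect_state connecting;
  void *backend;            /* backend-specific session objects */
};

struct my_cf {
  void *ctx;                /* filter-private, allocated with the filter */
};

static struct tls_filter_ctx *tls_ctx(struct my_cf *cf)
{
  /* backends receive the filter instance, not connectdata + sockindex */
  return (struct tls_filter_ctx *)cf->ctx;
}
```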
- Adding Curl_conn_is_ip_connected() to check if network connectivity
has been reached
- having ftp wait for network connectivity before proceeding with
transfers.
Fixes test failures 1631 and 1632 with hyper.
Closes #9952
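A sketch of the "wait for connectivity" pattern, with hypothetical names
standing in for the ftp state machine and the connectivity check:

```c
#include <stdbool.h>

struct fake_conn {
  bool ip_connected;
};

static bool conn_is_ip_connected(struct fake_conn *c)
{
  return c->ip_connected;   /* the real check asks the filter chain */
}

static int ftp_like_statemachine(struct fake_conn *c)
{
  if(!conn_is_ip_connected(c))
    return 1;               /* come back later, do not transfer yet */
  /* ... proceed with the control connection commands ... */
  return 0;
}
```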
Prior to this change Curl_read_plain would attempt to read the
socket directly. On Windows that's a problem because recv data may be
cached by libcurl and that data is only drained using Curl_recv_plain.
Rather than rewrite Curl_read_plain to handle cached recv data, I
changed it to wrap Curl_recv_plain, in much the same way that
Curl_write_plain already wraps Curl_send_plain.
Curl_read_plain -> Curl_recv_plain
Curl_write_plain -> Curl_send_plain
This fixes a bug in the schannel backend where decryption of arbitrary
TLS records fails because cached recv data is never drained. We send
data (TLS records formed by Schannel) using Curl_write_plain, which
calls Curl_send_plain, and that may do a recv-before-send
("pre-receive") to cache received data. The code calls Curl_read_plain
to read data (TLS records from the server), which prior to this change
did not call Curl_recv_plain and therefore cached recv data wasn't
retrieved, resulting in malformed TLS records and decryption failure
(SEC_E_DECRYPT_FAILURE).
The bug has only been observed during Schannel TLS 1.3 handshakes. Refer
to the issue and PR for more information.
--
This is take 2 of the original fix. It preserves the original behavior
of Curl_read_plain to write 0 to the bytes read parameter on error,
since apparently some callers expect that (SOCKS tests were hanging).
The original fix which landed in 12e1def5 and was later reverted in
18383fbf failed to work properly because it did not do that.
Also, it changes Curl_write_plain the same way to complement
Curl_read_plain, and it changes Curl_send_plain to return -1 instead of
0 on CURLE_AGAIN to complement Curl_recv_plain.
Behavior on error with these changes:
Curl_recv_plain returns -1 and *code receives error code.
Curl_send_plain returns -1 and *code receives error code.
Curl_read_plain returns error code and *n (bytes read) receives 0.
Curl_write_plain returns error code and *written receives 0.
--
Ref: https://github.com/curl/curl/issues/9431#issuecomment-1312420361
Assisted-by: Joel Depooter
Reported-by: Egor Pugin
Fixes https://github.com/curl/curl/issues/9431
Closes https://github.com/curl/curl/pull/9949
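A simplified model of the error contract listed above, assuming POSIX
recv(); the MY_* names are local stand-ins for the corresponding CURLcode
values and curl functions:

```c
#include <stddef.h>
#include <errno.h>
#include <sys/socket.h>         /* recv(); POSIX assumed for this sketch */

typedef int my_code;            /* stands in for CURLcode */
#define MY_OK         0
#define MY_AGAIN      81
#define MY_RECV_ERROR 56

/* recv-style: return bytes read, or -1 and set *code */
static long my_recv_plain(int fd, char *buf, size_t len, my_code *code)
{
  long n = (long)recv(fd, buf, len, 0);
  if(n < 0) {
    *code = (errno == EAGAIN || errno == EWOULDBLOCK) ?
            MY_AGAIN : MY_RECV_ERROR;
    return -1;
  }
  *code = MY_OK;
  return n;
}

/* read-style wrapper: return the error code and zero the byte count on
   error, mirroring the behavior listed in the commit message */
static my_code my_read_plain(int fd, char *buf, size_t len, size_t *nread)
{
  my_code code;
  long n = my_recv_plain(fd, buf, len, &code);
  if(n < 0) {
    *nread = 0;
    return code;
  }
  *nread = (size_t)n;
  return MY_OK;
}
```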
Prior to this change Curl_read_plain would attempt to read the
socket directly. On Windows that's a problem because recv data may be
cached by libcurl and that data is only drained using Curl_recv_plain.
Rather than rewrite Curl_read_plain to handle cached recv data, I
changed it to wrap Curl_recv_plain, in much the same way that
Curl_write_plain already wraps Curl_send_plain.
Curl_read_plain -> Curl_recv_plain
Curl_write_plain -> Curl_send_plain
This fixes a bug in the schannel backend where decryption of arbitrary
TLS records fails because cached recv data is never drained. We send
data (TLS records formed by Schannel) using Curl_write_plain, which
calls Curl_send_plain, and that may do a recv-before-send
("pre-receive") to cache received data. The code calls Curl_read_plain
to read data (TLS records from the server), which prior to this change
did not call Curl_recv_plain and therefore cached recv data wasn't
retrieved, resulting in malformed TLS records and decryption failure
(SEC_E_DECRYPT_FAILURE).
The bug has only been observed during Schannel TLS 1.3 handshakes. Refer
to the issue and PR for more information.
Ref: https://github.com/curl/curl/issues/9431#issuecomment-1312420361
Assisted-by: Joel Depooter
Reported-by: Egor Pugin
Fixes https://github.com/curl/curl/issues/9431
Closes https://github.com/curl/curl/pull/9904
- general construct/destroy in connectdata
- default implementations of callback functions
- connect: cfilters for connect and accept
- socks: cfilter for socks proxying
- http_proxy: cfilter for http proxy tunneling
- vtls: cfilters for primary and proxy ssl
- change in general handling of data/conn
- Curl_cfilter_setup() sets up filter chain based on data settings,
if none are installed by the protocol handler setup
- Curl_cfilter_connect() bootstraps filters into `connected` status,
used by handlers and multi to reach further stages
- Curl_cfilter_is_connected() to check if a conn is connected,
  i.e. all filters have done their work
- Curl_cfilter_get_select_socks() gets the sockets and READ/WRITE
indicators for multi select to work
- Curl_cfilter_data_pending() asks filters if they have incoming
  data pending for recv (see the sketch below)
- Curl_cfilter_recv()/Curl_cfilter_send() are the general callbacks
  installed in conn->recv/conn->send for io handling
- Curl_cfilter_attach_data()/Curl_cfilter_detach_data() inform filters
  about the addition/removal of a `data` from their connection
- adding vtls functions to prevent use of Curl_ssl globals directly
in other parts of the code.
Reviewed-by: Daniel Stenberg
Closes #9855
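A sketch of the chain traversal behind a data_pending-style query: each
installed filter is asked in turn whether it holds buffered input. The types
here are illustrative, not the curl structs:

```c
#include <stdbool.h>

struct demo_cf {
  bool (*data_pending)(struct demo_cf *cf);
  struct demo_cf *next;       /* next filter towards the socket */
  void *ctx;
};

static bool chain_data_pending(struct demo_cf *chain)
{
  struct demo_cf *cf;
  for(cf = chain; cf; cf = cf->next) {
    if(cf->data_pending && cf->data_pending(cf))
      return true;            /* e.g. a TLS layer with decrypted bytes queued */
  }
  return false;
}
```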
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes #8869
For when CURL_DISABLE_VERBOSE_STRINGS and DEBUGBUILD flags are both
active.
- socks.c : warning C4100: 'lineno': unreferenced formal parameter
(co-authored by Daniel Stenberg)
- mbedtls.c: warning C4189: 'port': local variable is initialized but
not referenced
- schannel.c: warning C4189: 'hostname': local variable is initialized
but not referenced
Closes #7528
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
Closes #7357
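A sketch of the newline guard mentioned above; illustrative, not the exact
curl macro:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static void demo_infof(const char *fmt, ...)
{
  va_list ap;
  assert(strchr(fmt, '\n') == NULL); /* format strings must not carry newlines */
  va_start(ap, fmt);
  vfprintf(stderr, fmt, ap);
  va_end(ap);
  fputc('\n', stderr);               /* the line ending is added here */
}
```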
Follow-up to 84d2839740 which changed the resolving to always resolve
both address families, but since SOCKS4 only supports IPv4 it should
scan for and use the first available IPv4 address.
Reported-by: shithappens2016 on github
Fixes #7345
Closes #7346
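The idea of the fix, shown with the standard struct addrinfo rather than
curl's internal Curl_addrinfo list: walk the resolved addresses and use the
first AF_INET entry, since SOCKS4 can only carry IPv4:

```c
#include <stddef.h>
#include <netdb.h>
#include <sys/socket.h>

static const struct addrinfo *first_ipv4(const struct addrinfo *list)
{
  const struct addrinfo *ai;
  for(ai = list; ai; ai = ai->ai_next) {
    if(ai->ai_family == AF_INET)
      return ai;          /* usable for the 4-byte SOCKS4 DSTIP field */
  }
  return NULL;            /* no IPv4 address at all: SOCKS4 cannot be used */
}
```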
The SOCKS code now uses the generic download buffer for temporary
storage during the connection procedure, instead of having its own
private 600 byte buffer that adds to the connectdata struct size. This
works fine because at this point the buffer is allocated but is not used
for download yet since the connection hasn't completed.
This reduces the connection struct size by 22% on a 64bit arch!
The SOCKS buffer needs to be at least 600 bytes, and the download buffer
is guaranteed to never be smaller than 1000 bytes.
Closes #6491
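The size relationship relied on above, expressed as a small compile-time
check; the macro names are hypothetical, curl keeps its own constants for
these sizes:

```c
#define SOCKS_MIN_NEEDED 600    /* what the SOCKS handshake requires */
#define DOWNLOAD_BUF_MIN 1000   /* guaranteed minimum download buffer */

/* fails to compile if the guarantee above ever stops holding */
typedef char socks_fits_in_download_buffer
  [(DOWNLOAD_BUF_MIN >= SOCKS_MIN_NEEDED) ? 1 : -1];
```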
... in most cases instead of 'struct connectdata *' but in some cases in
addition to it.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes #6425
The resolve call is done with the right port number, but the subsequent
check used the wrong one, which then could find a previous resolve which
would return and leave the fresh resolve "incomplete" and leaking
memory.
Fixes #6247
Closes #6253
Failures clearly returned from a (SOCKS) proxy now cause this return
code. Previously the situation was not very clear as to what would be
returned and when.
In addition: when this error code is returned, an application can use
CURLINFO_PROXY_ERROR to query libcurl for the detailed error, which then
returns a value from the new 'CURLproxycode' enum.
Closes #5770
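A minimal application-side example of using the new query; requires a
libcurl built with CURLE_PROXY and CURLINFO_PROXY_ERROR support:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode res;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "socks5://127.0.0.1:1080");
    res = curl_easy_perform(curl);
    if(res == CURLE_PROXY) {
      long pxcode = 0;            /* receives a CURLproxycode value */
      if(!curl_easy_getinfo(curl, CURLINFO_PROXY_ERROR, &pxcode))
        fprintf(stderr, "detailed proxy error: %ld\n", pxcode);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}
```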
Use the unsigned type (size_t) in pointer arithmetic. In this
context, the signed type (ssize_t) was used unnecessarily.
Authored-by: ihsinme on github
Closes #5654
The SOCKS4/5 state machines weren't properly terminated when the proxy
connection got closed, leading to a busy-loop.
Reported-by: zloi-user on github
Fixes #5532
Closes #5542
Now that all functions in select.[ch] take timediff_t instead
of the limited int or long, we can remove type conversions
and related preprocessor checks to silence compiler warnings.
Avoiding conversions from time_t was already done in 842f73de.
Based upon #5262
Supersedes #5214, #5220 and #5221
Follow-up to #5343 and #5479
Closes #5490
Commit 4a4b63d forgot to set the expected SOCKS5 reply length when the
reply ATYP is X'01'. This resulted in erroneously expecting more bytes
when the request length is greater than the reply length (e.g., when
remotely resolving the hostname).
Closes #5527
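The expected reply length per ATYP, following RFC 1928 section 6 (the reply
is VER REP RSV ATYP BND.ADDR BND.PORT); a sketch of the computation the fix
restores for ATYP X'01':

```c
#include <stddef.h>

static size_t socks5_reply_len(unsigned char atyp, unsigned char addr_len)
{
  const size_t header = 4;               /* VER + REP + RSV + ATYP */
  const size_t port = 2;                 /* BND.PORT */
  switch(atyp) {
  case 0x01:
    return header + 4 + port;            /* IPv4: 10 bytes total */
  case 0x04:
    return header + 16 + port;           /* IPv6: 22 bytes total */
  case 0x03:
    return header + 1 + addr_len + port; /* domain: length octet + name */
  default:
    return 0;                            /* unknown ATYP */
  }
}
```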
- Stick to a single unified way to use structs
- Make checksrc complain on 'typedef struct {'
- Allow them in tests, public headers and examples
- Let MD4_CTX, MD5_CTX, and SHA256_CTX typedefs remain as they actually
typedef different types/structs depending on build conditions.
Closes #5338
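What the unified style looks like in practice, as a small illustration:

```c
/* preferred: a tagged struct, always spelled out at the use site;
   'typedef struct { ... } my_thing_t;' is what checksrc now flags */
struct my_thing {
  int field;
};

static int use_it(void)
{
  struct my_thing t = { 42 };
  return t.field;
}
```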
Coverity found CID 1461718:
Integer handling issues (CONSTANT_EXPRESSION_RESULT) "timeout_ms >
9223372036854775807L" is always false regardless of the values of its
operands. This occurs as the logical second operand of "||".
Closes #5240
- Document in Curl_timeleft's comment block that returning 0 signals no
timeout (ie there's infinite time left).
- Fix SOCKS' Curl_blockread_all for the case when no timeout was set.
Prior to this change if the timeout had a value of 0 and that was passed
to SOCKET_READABLE it would return right away instead of blocking. That
was likely because it was not well understood that when Curl_timeleft
returns 0 it is not a timeout of 0 ms but actually means no timeout.
Ref: https://github.com/curl/curl/pull/5214#issuecomment-612512360
Closes https://github.com/curl/curl/pull/5220
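The convention made concrete: a hypothetical helper that maps a
Curl_timeleft-style value (0 = no timeout, negative = already expired) onto
a poll(2)-style timeout where -1 means block indefinitely:

```c
static int poll_timeout_from_timeleft(long long timeleft_ms)
{
  if(timeleft_ms == 0)
    return -1;               /* no timeout configured: block until ready */
  if(timeleft_ms < 0)
    return 0;                /* already expired: do not block at all */
  if(timeleft_ms > 2147483647LL)
    return 2147483647;       /* clamp to what poll() accepts */
  return (int)timeleft_ms;
}
```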
- If loss of data may occur converting a timediff_t to time_t and
the time value is > TIME_T_MAX then treat it as TIME_T_MAX.
This is a follow-up to 8843678 which removed the (time_t) typecast
from the macros so that conversion warnings could be identified.
Closes https://github.com/curl/curl/pull/5199
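A sketch of the clamp described above, with local stand-ins for curl's
timediff_t and TIME_T_MAX, assuming a signed time_t:

```c
#include <stdint.h>
#include <time.h>

typedef int64_t my_timediff;    /* stands in for timediff_t */

/* crude maximum for a signed time_t; curl has its own TIME_T_MAX macro */
#define MY_TIME_T_MAX \
  ((time_t)(((uintmax_t)1 << (sizeof(time_t)*8 - 1)) - 1))

static time_t clamp_to_time_t(my_timediff value)
{
  if(value > (my_timediff)MY_TIME_T_MAX)
    return MY_TIME_T_MAX;       /* would overflow: saturate at the max */
  return (time_t)value;
}
```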
1. The socks4 state machine was broken in the host resolving phase
2. The code now insists on IPv4-only when using SOCKS4 as the protocol
only supports that.
Regression from #4907 and 4a4b63d, shipped in 7.69.0
Reported-by: amishmm on github
Bug: https://github.com/curl/curl/issues/5053#issuecomment-596191594
Closes #5061
Prior to this change when a server returned a socks5 connect error then
curl would parse the destination address:port from that data and show it
to the user as the destination:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.12.206 (locally resolved)
* Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
curl: (7) Can't complete SOCKS5 connection to 253.127.0.0:26673. (1)
That's incorrect because the address:port included in the connect error
is actually a bind address:port (typically unused) and not the
destination address:port. This fix changes curl to show the destination
information that curl sent to the server instead:
curld -v --socks5 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to IPv4 172.217.7.14:99 (locally resolved)
* Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curl: (7) Can't complete SOCKS5 connection to 172.217.7.14:99. (1)
curld -v --socks5-hostname 10.0.3.1:1080 http://google.com:99
* SOCKS5 communication to google.com:99
* SOCKS5 connect to google.com:99 (remotely resolved)
* Can't complete SOCKS5 connection to google.com:99. (1)
curl: (7) Can't complete SOCKS5 connection to google.com:99. (1)
Ref: https://tools.ietf.org/html/rfc1928#section-6
Closes https://github.com/curl/curl/pull/4394
Due to limitations in Curl_resolver_wait_resolv(), it doesn't work for
DOH resolves. This fix disables DOH for those.
Limitation added to KNOWN_BUGS.
Fixes #3850
Closes #3857
- replace tabs with spaces where possible
- remove line ending spaces
- remove double/triple newlines at EOF
- fix a non-UTF-8 character
- cleanup a few indentations/line continuations
in manual examples
Closes https://github.com/curl/curl/pull/3037