- add `SingleRequest->download_done` as indicator that
all download bytes have been received
- remove `stop_reading` bool from readwrite functions
- move excess body handling into client download writer
Closes#12371
This PR has these changes:
Renaming of unencode_* to cwriter, i.e. client writers
- documentation of sendf.h functions
- move max decode stack checks back to content_encoding.c
- define writer phase which was used as order before
- introduce phases for monitoring in between decode phases
- offering default implementations for init/write/close
Add type parameter to client writer's do_write()
- always pass all writes through the writer stack
- writers who only care about BODY data will pass other writes unchanged
add RAW and PROTOCOL client writers
- RAW used for Curl_debug() logging of CURLINFO_DATA_IN
- PROTOCOL used for updates to data->req.bytecount, max_filesize checks and
Curl_pgrsSetDownloadCounter()
- remove all updates of data->req.bytecount and calls to
Curl_pgrsSetDownloadCounter() and Curl_debug() from other code
- adjust test457 expected output to no longer see the excess write
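For illustration, a minimal sketch of the pass-through idea; the type and
function names below are made up for this sketch and are not the internal
API from sendf.h:

    #include <stddef.h>

    /* illustrative write kinds; curl uses CLIENTWRITE_* flags internally */
    typedef enum { CW_BODY, CW_HEADER, CW_INFO } cw_type;

    struct cwriter {
      int (*do_write)(struct cwriter *w, cw_type type,
                      const char *buf, size_t len);
      struct cwriter *next;    /* next writer in the stack */
    };

    /* a decoding writer that only cares about BODY data: every other kind
       of write is passed down the stack unchanged */
    static int decoder_do_write(struct cwriter *w, cw_type type,
                                const char *buf, size_t len)
    {
      if(type != CW_BODY)
        return w->next->do_write(w->next, type, buf, len);
      /* ... decode buf/len here, then forward the decoded bytes ... */
      return w->next->do_write(w->next, type, buf, len);
    }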
Closes#12184
- move definitions from content_encoding.h to sendf.h
- move create/cleanup/add code into sendf.c
- installed content_encoding writers will always be called
on Curl_client_write(CLIENTWRITE_BODY)
- Curl_client_cleanup() frees writers and tempbuffers from
paused transfers, regardless of protocol
Closes#11908
- use CLIENTWRITE_BODY *only* when data is actually body data
- add CLIENTWRITE_INFO for meta data that is *not* a HEADER
- debug assertions that BODY/INFO/HEADER is not used mixed
- move `data->set.include_header` check into Curl_client_write
so protocol handlers no longer have to care
- add a special case in FTP for `data->set.include_header` for historic,
backward-compatible reasons
- move unpausing of client writes from easy.c to sendf.c, so that
code is in one place and can forward flags correctly
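A self-contained sketch of the "do not mix" rule; the flag values below are
illustrative, the real definitions live in sendf.h:

    #include <assert.h>
    #include <stddef.h>

    #define CLIENTWRITE_BODY   (1<<0)  /* actual response body bytes */
    #define CLIENTWRITE_INFO   (1<<1)  /* meta data that is not a header */
    #define CLIENTWRITE_HEADER (1<<2)  /* header data */

    static void client_write(int type, const char *buf, size_t len)
    {
      /* debug assertion: exactly one of BODY/INFO/HEADER per write */
      int kind = type &
        (CLIENTWRITE_BODY | CLIENTWRITE_INFO | CLIENTWRITE_HEADER);
      assert(kind == CLIENTWRITE_BODY || kind == CLIENTWRITE_INFO ||
             kind == CLIENTWRITE_HEADER);
      (void)buf;
      (void)len;
      /* ... pass to the header callback and/or the client writer stack ... */
    }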
Closes#11885
- refs #11342 where errors with git https interactions
were observed
- the problem was caused by a first send larger than 64KB,
which resulted in later retries of only 64KB
- limit sending of 1st block to 64KB
- adjust h2/h3 filters to cope with parsing the HTTP/1.1
formatted request in chunks
- introducing Curl_nwrite() as companion to Curl_write()
for the many cases where the sockindex is already known
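A hedged sketch of the two entry points; the prototypes are approximated,
not copied from sendf.h, and the types are simplified for illustration:

    #include <stddef.h>
    #include <sys/types.h>

    struct Curl_easy;           /* opaque transfer handle */
    typedef int curl_socket_t;  /* simplified; real type is platform-specific */
    typedef int CURLcode;       /* simplified for this sketch */

    /* existing entry point: the caller passes the raw socket */
    CURLcode Curl_write(struct Curl_easy *data, curl_socket_t sockfd,
                        const void *mem, size_t len, ssize_t *written);

    /* new companion: the caller passes the sockindex it already knows */
    CURLcode Curl_nwrite(struct Curl_easy *data, int sockindex,
                         const void *mem, size_t len, ssize_t *written);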
Fixes#11342 (again)
Closes#11803
- delete completed TODO from `./CMakeLists.txt`.
- convert a C++ comment to C89 in `./CMake/CurlTests.c`.
- delete duplicate EOLs from EOF.
- add missing EOL at EOF.
- delete whitespace at EOL (except from expected test results).
- convert tabs to spaces.
- convert CRLF EOLs to LF in GHA yaml.
- text casing fixes in `./CMakeLists.txt`.
- fix a codespell typo in `packages/OS400/initscript.sh`.
Closes#11772
- add an `id` long to Curl_easy, -1 on init
- once added to a multi (or its own multi), it gets
a non-negative number assigned by the connection cache
- `id` is unique among all transfers using the same
cache until reaching LONG_MAX where it will wrap
around. So, not unique eternally.
- CURLINFO_CONN_ID returns the connection id attached to
data or, if none present, data->state.lastconnect_id
- variables and type declared in tool for write out
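For reference, an application-side example of reading the connection id
after a transfer; this assumes a libcurl new enough to provide
CURLINFO_CONN_ID, which exposes the value as a curl_off_t:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        if(curl_easy_perform(curl) == CURLE_OK) {
          curl_off_t conn_id = -1;
          /* id of the connection this transfer used, -1 if none */
          if(curl_easy_getinfo(curl, CURLINFO_CONN_ID, &conn_id) == CURLE_OK)
            printf("connection id: %" CURL_FORMAT_CURL_OFF_T "\n", conn_id);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }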
Closes#11185
- state is fully kept at connection, since curl_ws_send() and
curl_ws_recv() have a lifetime beyond the usual transfers
- no more limit on frame sizes
Reported-by: simplerobot on github
Fixes#10962
Closes#10999
- Curl_write_plain/Curl_read_plain have been eliminated. The last remaining
uses now go through Curl_conn_send/recv so that requests use the
conn->send/recv callbacks, which default to the cfilter implementations.
- Curl_recv_plain/Curl_send_plain have been internalized in cf-socket.c.
- USE_RECV_BEFORE_SEND_WORKAROUND (active on Windows) has been moved
into cf-socket.c. The pre_recv buffer is held at the socket filter
context. `postponed_data` structures have been removed from
`connectdata`.
- the hang in HTTP/2 request handling was a result of read buffering
on all sends and the multi handling is not prepared for this. The
following happens:
- multi performs on an HTTP/2 easy handle
- h2 reads and processes data
- this leads to a send of h2 data
- which receives and buffers before the send
- h2 returns
- multi selects on the socket, but no data arrives (it's in the buffer already)
The workaround now receives data in a loop as long as there is something in
the buffer. The real fix would be for multi to change, so that `data_pending`
is evaluated before deciding to wait on the socket.
The io_buffer in cf-socket.c holds the pre-received data; it is optional
and only available/used when -DUSE_RECV_BEFORE_SEND_WORKAROUND is active,
e.g. on Windows configurations. It also maintains the original checks on
the protocol handler being HTTP and conn->send/recv not being replaced.
The HTTP/2 (nghttp2) cfilter now sets data->state.drain when it finds
out that the "lower" filter chain still has pending data at the end of
its IO operation. This prevents the processing from becoming stalled.
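A rough sketch of the receive-in-a-loop workaround; the structure and names
below are made up for illustration, not the actual cf-socket.c code:

    #include <stddef.h>

    /* illustrative pre-receive buffer kept at the socket filter context */
    struct io_buffer {
      char *data;
      size_t len;    /* bytes currently buffered */
      size_t size;   /* allocation size */
    };

    /* stand-in for the real socket recv; here it reports "nothing to read" */
    static size_t socket_recv(struct io_buffer *b)
    {
      (void)b;
      return 0;
    }

    /* keep pre-receiving while data keeps arriving and the buffer has room,
       so nothing readable is left in the kernel while multi waits on the
       socket */
    static void pre_receive_loop(struct io_buffer *b)
    {
      size_t nread;
      do {
        nread = socket_recv(b);
      } while(nread > 0 && b->len < b->size);
    }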
Closes#10280
- new functions and macros for cfilter debugging
- set CURL_DEBUG with names of cfilters where debug logging should be
enabled
- use GNUC __attribute__ to enable printf format checks during compile
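A self-contained example of the GNUC format-check attribute in the same
spirit; the macro and function names here are invented for the example:

    #include <stdarg.h>
    #include <stdio.h>

    #if defined(__GNUC__)
    #define CF_PRINTF(fmt, args) __attribute__((format(printf, fmt, args)))
    #else
    #define CF_PRINTF(fmt, args)
    #endif

    /* the compiler now checks the format string against the arguments */
    static void cf_debug(const char *name, const char *fmt, ...) CF_PRINTF(2, 3);

    static void cf_debug(const char *name, const char *fmt, ...)
    {
      va_list ap;
      fprintf(stderr, "[%s] ", name);
      va_start(ap, fmt);
      vfprintf(stderr, fmt, ap);
      va_end(ap);
      fputc('\n', stderr);
    }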
Closes#10271
- copy `struct Curl_addrinfo` on filter setup into context
- replace `struct Curl_addrinfo *` with a `struct Curl_sockaddr_ex *` in
connectdata that is set and NULLed by the socket filter
- this means we have no reference to the resolver info in connectdata or
its filters
- trigger the CF_CTRL_CONN_INFO_UPDATE event when the complete filter
chain reaches connected status
- update easy handle connection information on CF_CTRL_DATA_SETUP event.
Closes#10213
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow copyright statements without a year
Closes#10205
Refactoring of connection setup and happy eyeballing. Move
nghttp2, ngtcp2, quiche and msh3 into connection filters.
- eyeballing cfilter that uses sub-filters for performing parallel connects
- socket cfilter for all transport types, including QUIC
- QUIC implementations in cfilter, can now participate in eyeballing
- connection setup is more dynamic in order to adapt to which filter
actually connected. Relevant for deciding whether an SSL filter needs to
be added or if SSL has already been provided
- HTTP/3 test cases similar to HTTP/2
- multiuse of parallel transfers for HTTP/3, tested for ngtcp2 and quiche
- Fix for data attach/detach in VTLS filters that could lead to crashes
during parallel transfers.
- Eliminating setup() methods in cfilters, no longer needed.
- Improving Curl_conn_is_alive() to replace Curl_connalive() and
integrated ssl alive checks into cfilter.
- Adding CF_CTRL_CONN_INFO_UPDATE to tell filters to update
connection info and persist it at the easy handle.
- Several more cfilter related cleanups and moves:
- stream_weight and dependency info is now wrapped in struct
Curl_data_priority
- Curl_data_priority is available in HTTP2|HTTP3 enabled builds
- Curl_data_priority members depend on NGHTTP2 support
- handling init/reset/cleanup of priority part of url.c
- data->state.priority same struct, but shallow copy for compares only
- PROTOPT_STREAM has been removed
- Curl_conn_is_multiplex() now available to check on capability
- Adding query method to connection filters.
- ngtcp2+quiche: implementing query for max concurrent transfers.
- Adding is_alive and keep_alive cfilter methods. Adding DATA_SETUP event.
- setting keepalive timestamp on connect
- DATA_SETUP is called after the connection has been completely
setup (but may not be connected yet) to allow filters to initialize
data members they use.
- there is no socket to be had with msh3, it is unclear how select
shall work
- manual test via "curl --http3 https://curl.se" fails with "empty
reply from server".
- Various socket/conn related cleanups:
- Curl_socket is now Curl_socket_open and in cf-socket.c
- Curl_closesocket is now Curl_socket_close and in cf-socket.c
- Curl_ssl_use has been replaced with Curl_conn_is_ssl
- Curl_conn_tcp_accepted_set has been split into
Curl_conn_tcp_listen_set and Curl_conn_tcp_accepted_set
with a clearer purpose
Closes#10141
- almost all backend calls pass the Curl_cfilter instance instead of
connectdata+sockindex
- ssl_connect_data is removed from struct connectdata and made internal
to vtls
- ssl_connect_data is allocated in the added filter, kept at cf->ctx
- added functions to let an SSL filter access its ssl_primary_config and
ssl_config_data; these select the proper subfields in conn and data,
depending on whether the filter was added as plain or proxy
- adjusted all backends to use the changed api
- adjusted all backends to access config data via the exposed
functions, no longer using conn or data directly
cfilter renames for clear purpose:
- methods `Curl_conn_*(data, conn, sockindex)` work on the complete
filter chain at `sockindex` and connection `conn`.
- methods `Curl_cf_*(cf, ...)` work on a specific Curl_cfilter
instance.
- methods `Curl_conn_cf()` work on/with filter instances at a
connection.
- rebased and resolved some naming conflicts
- hostname validation (and session lookup) on SECONDARY use the same
name as on FIRST (again).
New debug macros, and removal of connectdata from function signatures where
not needed.
Schannel is adapted to the new Curl_read_plain parameter.
Closes#9919
Prior to this change Curl_read_plain would attempt to read the
socket directly. On Windows that's a problem because recv data may be
cached by libcurl and that data is only drained using Curl_recv_plain.
Rather than rewrite Curl_read_plain to handle cached recv data, I
changed it to wrap Curl_recv_plain, in much the same way that
Curl_write_plain already wraps Curl_send_plain.
Curl_read_plain -> Curl_recv_plain
Curl_write_plain -> Curl_send_plain
This fixes a bug in the schannel backend where decryption of arbitrary
TLS records fails because cached recv data is never drained. We send
data (TLS records formed by Schannel) using Curl_write_plain, which
calls Curl_send_plain, and that may do a recv-before-send
("pre-receive") to cache received data. The code calls Curl_read_plain
to read data (TLS records from the server), which prior to this change
did not call Curl_recv_plain and therefore cached recv data wasn't
retrieved, resulting in malformed TLS records and decryption failure
(SEC_E_DECRYPT_FAILURE).
The bug has only been observed during Schannel TLS 1.3 handshakes. Refer
to the issue and PR for more information.
--
This is take 2 of the original fix. It preserves the original behavior
of Curl_read_plain to write 0 to the bytes read parameter on error,
since apparently some callers expect that (SOCKS tests were hanging).
The original fix which landed in 12e1def5 and was later reverted in
18383fbf failed to work properly because it did not do that.
Also, it changes Curl_write_plain the same way to complement
Curl_read_plain, and it changes Curl_send_plain to return -1 instead of
0 on CURLE_AGAIN to complement Curl_recv_plain.
Behavior on error with these changes:
Curl_recv_plain returns -1 and *code receives error code.
Curl_send_plain returns -1 and *code receives error code.
Curl_read_plain returns error code and *n (bytes read) receives 0.
Curl_write_plain returns error code and *written receives 0.
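A condensed sketch of the wrapper relationship and the error convention
above; the signatures are simplified and the stub recv is only there to make
the example self-contained:

    #include <stddef.h>
    #include <sys/types.h>

    typedef int CURLcode;   /* simplified for this sketch */
    #define CURLE_OK 0
    struct Curl_easy;       /* opaque here */

    /* stand-in for the transport-level call: returns -1 on error and sets
       *code; here it just pretends the connection delivered nothing */
    static ssize_t Curl_recv_plain(struct Curl_easy *data, int num,
                                   char *buf, size_t len, CURLcode *code)
    {
      (void)data; (void)num; (void)buf; (void)len;
      *code = CURLE_OK;
      return 0;
    }

    /* the wrapper: returns the error code and writes 0 to *n on error,
       since some callers rely on that */
    static CURLcode Curl_read_plain(struct Curl_easy *data, int num,
                                    char *buf, size_t len, ssize_t *n)
    {
      CURLcode code = CURLE_OK;
      ssize_t nread = Curl_recv_plain(data, num, buf, len, &code);
      if(nread < 0) {
        *n = 0;
        return code;
      }
      *n = nread;
      return CURLE_OK;
    }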
--
Ref: https://github.com/curl/curl/issues/9431#issuecomment-1312420361
Assisted-by: Joel Depooter
Reported-by: Egor Pugin
Fixes https://github.com/curl/curl/issues/9431
Closes https://github.com/curl/curl/pull/9949
Prior to this change Curl_read_plain would attempt to read the
socket directly. On Windows that's a problem because recv data may be
cached by libcurl and that data is only drained using Curl_recv_plain.
Rather than rewrite Curl_read_plain to handle cached recv data, I
changed it to wrap Curl_recv_plain, in much the same way that
Curl_write_plain already wraps Curl_send_plain.
Curl_read_plain -> Curl_recv_plain
Curl_write_plain -> Curl_send_plain
This fixes a bug in the schannel backend where decryption of arbitrary
TLS records fails because cached recv data is never drained. We send
data (TLS records formed by Schannel) using Curl_write_plain, which
calls Curl_send_plain, and that may do a recv-before-send
("pre-receive") to cache received data. The code calls Curl_read_plain
to read data (TLS records from the server), which prior to this change
did not call Curl_recv_plain and therefore cached recv data wasn't
retrieved, resulting in malformed TLS records and decryption failure
(SEC_E_DECRYPT_FAILURE).
The bug has only been observed during Schannel TLS 1.3 handshakes. Refer
to the issue and PR for more information.
Ref: https://github.com/curl/curl/issues/9431#issuecomment-1312420361
Assisted-by: Joel Depooter
Reported-by: Egor Pugin
Fixes https://github.com/curl/curl/issues/9431
Closes https://github.com/curl/curl/pull/9904
- general construct/destroy in connectdata
- default implementations of callback functions
- connect: cfilters for connect and accept
- socks: cfilter for socks proxying
- http_proxy: cfilter for http proxy tunneling
- vtls: cfilters for primary and proxy ssl
- change in general handling of data/conn
- Curl_cfilter_setup() sets up filter chain based on data settings,
if none are installed by the protocol handler setup
- Curl_cfilter_connect() bootstraps filters into `connected` status,
used by handlers and multi to reach further stages
- Curl_cfilter_is_connected() to check if a conn is connected,
i.e. all filters have done their work
- Curl_cfilter_get_select_socks() gets the sockets and READ/WRITE
indicators for multi select to work
- Curl_cfilter_data_pending() asks filters if they have incoming
data pending for recv
- Curl_cfilter_recv()/Curl_cfilter_send are the general callbacks
installed in conn->recv/conn->send for io handling
- Curl_cfilter_attach_data()/Curl_cfilter_detach_data() inform filters
of the addition/removal of a `data` from their connection
- adding vtls functions to prevent use of Curl_ssl globals directly
in other parts of the code.
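A compact sketch of the connection filter idea: each filter exposes a small
set of callbacks and delegates to the filter below it. The struct layout and
names are illustrative only, not curl's actual cfilter types:

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/types.h>

    struct cfilter;   /* one link in a connection's filter chain */

    struct cfilter_ops {
      int     (*do_connect)(struct cfilter *cf, bool *done);
      bool    (*data_pending)(struct cfilter *cf);
      ssize_t (*do_send)(struct cfilter *cf, const void *buf, size_t len);
      ssize_t (*do_recv)(struct cfilter *cf, void *buf, size_t len);
      void    (*do_close)(struct cfilter *cf);
    };

    struct cfilter {
      const struct cfilter_ops *ops;
      struct cfilter *next;   /* filter below, e.g. SSL -> socket */
      void *ctx;              /* filter-private state */
      bool connected;
    };

    /* the chain counts as connected when every filter reports connected */
    static bool chain_is_connected(struct cfilter *cf)
    {
      for(; cf; cf = cf->next) {
        if(!cf->connected)
          return false;
      }
      return true;
    }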
Reviewed-by: Daniel Stenberg
Closes#9855
At this point, the psnd->buffer will always exist. We have already
allocated a new buffer if one did not previously exist, and returned
from the function if the allocation failed.
Closes#9801
- Fixed an issue where is_in_callback was getting cleared when using web
sockets with debug logging enabled
- Ensure is_in_callback is set on the handle when calling out to fwrite_func
- Change the write vs. send_data decision to whether or not the handle
is in CONNECT_ONLY mode.
- Account for buflen not including the header length in curl_ws_send
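For context, a minimal application-side use of the CONNECT_ONLY WebSocket
mode that this decision is about; it assumes a libcurl built with WebSocket
support and the URL is a placeholder:

    #include <string.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "wss://example.com/ws");
        /* 2L: perform the WebSocket upgrade, then hand control back */
        curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 2L);
        if(curl_easy_perform(curl) == CURLE_OK) {
          const char *msg = "hello";
          size_t sent = 0;
          /* in CONNECT_ONLY mode curl_ws_send() transmits on the connection
             directly instead of going through the regular transfer path */
          curl_ws_send(curl, msg, strlen(msg), &sent, 0, CURLWS_TEXT);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }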
Closes#9665
As virtually no caller checked the return code, and those that did
wrongly treated it as a CURLcode. Detected by the icc compiler warning:
enumerated type mixed with another type
Closes#9179
This function no longer returns a negative value if the formatting
string is bad since the return value would sometimes be propagated as a
return code from the mprintf* functions and they are documented to
return the length of the output. Which cannot be negative.
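To illustrate the documented contract with the public variant from
<curl/mprintf.h> (exact truncation semantics aside, the point is that the
return value is the output length and never negative):

    #include <stdio.h>
    #include <curl/mprintf.h>

    int main(void)
    {
      char buf[64];
      /* returns the length of the formatted output written to buf */
      int len = curl_msnprintf(buf, sizeof(buf), "%d bytes from %s",
                               512, "server");
      printf("%d: %s\n", len, buf);
      return 0;
    }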
Fixes#9149
Closes#9151
Reported-by: yiyuaner on github
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a Github workflow to check pull requests and adapts
copyright.pl to the changes.
Closes#8869
Historically, Curl_client_write() used a length value of 0 as a marker
for a null-terminated data string. This feature has been removed in
commit f4b85d2. To detect leftover uses of the feature, a DEBUGASSERT
statement rejecting a length with value 0 was introduced, effectively
precluding use of this function with zero-length data.
The current commit removes the DEBUGASSERT and makes the function
return immediately if the length is 0.
A direct effect is to fix trying to output a zero-length distinguished
name in openldap.
Another DEBUGASSERT statement is also rephrased for better readability.
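The shape of the change, as a hedged sketch with a simplified signature
rather than the actual sendf.c code:

    #include <stddef.h>

    typedef int CURLcode;   /* simplified for this sketch */
    #define CURLE_OK 0
    struct Curl_easy;       /* opaque here */

    static CURLcode client_write(struct Curl_easy *data, int type,
                                 const char *ptr, size_t len)
    {
      /* zero length is no longer rejected by a DEBUGASSERT; it is simply
         a no-op, which lets callers output empty values such as a
         zero-length distinguished name */
      if(!len)
        return CURLE_OK;
      (void)data; (void)type; (void)ptr;
      /* ... pass the data through the installed client writers ... */
      return CURLE_OK;
    }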
Closes#7898
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
Closes#7357
Also added 'CURL_SMALLSENDS' to make Curl_write() send short packets,
which helped verify this even more.
Add test 363 to verify.
Reported-by: ustcqidi on github
Fixes#6950
Closes#7024
... in most cases instead of 'struct connectdata *' but in some cases in
addition to.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes#6425