Move all handling of HTTP's `Expect: 100-continue` feature into a client
reader. Add sending flag `KEEP_SEND_TIMED` that triggers transfer
sending on general events like a timer.
HTTP installs a `CURL_CR_PROTOCOL` reader when announcing `Expect:
100-continue`. That reader works as follows:
- on the first invocation, it records the time, starts the
`EXPIRE_100_TIMEOUT` timer, disables `KEEP_SEND`, enables
`KEEP_SEND_TIMED` and returns 0 bytes with eos=FALSE, like a paused
upload.
- on subsequent invocations it checks if the timer has expired. If so,
it enables `KEEP_SEND` and switches to passing reads through to the
underlying readers.
Transfer handling's `readwrite()` will be invoked when a timer expires
(like `EXPIRE_100_TIMEOUT`) or when data from the server arrives. Seeing
`KEEP_SEND_TIMED`, it will try to upload more data, which triggers
reading from the client readers again, which in turn may lead to a new
pause or start the upload (see the sketch below).
Flags and timestamps connected to this have been moved from
`SingleRequest` into the reader's context.
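A minimal sketch of how such a reader's read step could look, assuming a
client reader callback shaped like the other `CURL_CR_*` readers and a
`Curl_creader_read()` helper for delegating to the next reader in the
chain; the context struct and several names are illustrative, not the
exact code in lib/http.c:
```c
/* Illustrative sketch only: struct and field names are assumptions */
struct cr_exp100_ctx {
  struct Curl_creader super;
  struct curltime start;      /* when we started waiting for the 100 */
  int state;                  /* 0 = not started, 1 = waiting, 2 = done */
};

static CURLcode cr_exp100_read(struct Curl_easy *data,
                               struct Curl_creader *r,
                               char *buf, size_t blen,
                               size_t *nread, bool *eos)
{
  struct cr_exp100_ctx *ctx = r->ctx;

  switch(ctx->state) {
  case 0:                     /* first invocation: start waiting */
    ctx->start = Curl_now();
    Curl_expire(data, data->set.expect_100_timeout, EXPIRE_100_TIMEOUT);
    data->req.keepon &= ~KEEP_SEND;      /* stop regular sending */
    data->req.keepon |= KEEP_SEND_TIMED; /* wake on timer/general events */
    ctx->state = 1;
    *nread = 0;
    *eos = FALSE;
    return CURLE_OK;          /* behaves like a paused upload */
  case 1:                     /* waiting: did the timer expire? */
    if(Curl_timediff(Curl_now(), ctx->start)
       < data->set.expect_100_timeout) {
      *nread = 0;
      *eos = FALSE;
      return CURLE_OK;        /* keep waiting */
    }
    ctx->state = 2;
    data->req.keepon |= KEEP_SEND;       /* give up waiting, send body */
    /* FALLTHROUGH */
  default:                    /* pass reads through to the next reader */
    return Curl_creader_read(data, r->next, buf, blen, nread, eos);
  }
}
```
Receiving the actual 100-continue response would similarly re-enable
`KEEP_SEND` from the response handling, which is not shown here.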
Closes #13110
- When curl sees a TCP close from the peer, do not start a TLS shutdown.
TLS shutdown is a handshake and if the peer already closed the
connection, it is not interested in participating.
Reported-by: dfdity on github
Assisted-by: Jiří Bok
Assisted-by: Pēteris Caune
Fixes #10290
Closes #13087
A transfer may do several `SingleRequest`s for its success. This happens
regularly for authentication, follows and retries on failed connections.
The "readwrite()" calls and functions connected to those carried a `bool
*done` parameter to indicate that the current `SingleRequest` is over.
This may happen before `upload_done` or `download_done` bits of
`SingleRequest` are set.
The problem with that is that `write_resp()` protocol handlers are now
invoked in places where the `bool *done` cannot be passed up to the
caller. Instead of being a bool in the call chain, it needs to become a
member of `SingleRequest`, reflecting its state.
This removes the `bool *done` parameter and adds the `done` bit to
`SingleRequest` instead. It adds `Curl_req_soft_reset()` for using a
`SingleRequest` in a follow up, clearing `done` and other
flags/counters.
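A minimal sketch of what the soft reset amounts to; the exact prototype
and the full set of cleared fields in lib/request.c may differ:
```c
/* Illustrative sketch: clear per-request state so the same
   SingleRequest can be reused for a follow-up request */
CURLcode Curl_req_soft_reset(struct SingleRequest *req,
                             struct Curl_easy *data)
{
  req->done = FALSE;           /* the new request is not over yet */
  req->upload_done = FALSE;
  req->download_done = FALSE;
  req->bytecount = 0;          /* counters restart for the follow-up */
  req->headerbytecount = 0;
  (void)data;
  return CURLE_OK;
}
```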
Closes #13096
New struct `ip_quadruple` for holding local/remote addr+port:
- used in data->info, conn and cf-socket.c
- the complete struct is copied back and forth (see the sketch below)
- add 'secondary' to conn
- use secondary when reporting success for the ftp 2nd connection
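A sketch of what the struct could look like; the exact field names and
the `MAX_IPADR_LEN` bound are assumptions about lib/urldata.h:
```c
/* Illustrative sketch of the new struct */
struct ip_quadruple {
  char remote_ip[MAX_IPADR_LEN];  /* peer address as text */
  int  remote_port;
  char local_ip[MAX_IPADR_LEN];   /* local address as text */
  int  local_port;
};
```
With the four values in one struct, data->info, the connection and
cf-socket.c can exchange them with a single struct assignment instead
of copying each member separately.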
Reported-by: DasKutti on github
Fixes #13084
Closes #13090
- seek_func/seek_client, use transfer values only
- remove the copies held in `struct connectdata`, only ever use
`data->set.seek_func`
- resolves possible issues with multiuse connections
- the new mime post reader eliminates the need to ever overwrite this
- websockets, remove empty Curl_ws_done() function
Closes #13079
- Store the c-ares version during global init.
Prior to this change several threads could write the same data to a
static int variable at the same time. Though in practice it is not a
problem, ThreadSanitizer may warn (see the sketch below).
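A minimal sketch of the idea, with illustrative names; it only shows
that the version is read once during global init and then treated as
read-only:
```c
#include <ares.h>

/* written exactly once from curl_global_init(), read-only afterwards,
   so concurrent readers no longer race on writing a static */
static int ares_ver = 0;

static void store_ares_version(void)   /* hypothetical init hook */
{
  (void)ares_version(&ares_ver);
}
```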
Reported-by: Nikita Taranov
Assisted-by: Jay Satiro
Fixes #13065
Closes #13000
Just a tidy-up to contain 'ifdef' pollution of common
code parts with implementation specifics.
- remove the ifdef hyper unpausing in easy.c
- add hyper client reader for CURL_CR_PROTOCOL phase
that implements the unpause method for calling
the hyper waker if it is set
Closes #13075
This is a follow-up for PR #12897.
Add support for SHA-512/256 digest calculation by TLS backends.
Currently only OpenSSL and GnuTLS (actually, nettle) support
SHA-512/256.
Closes #13070
- `struct Curl_cwriter` and `struct Curl_creader` now carry a
`void *ctx` member that points to the instance as allocated.
- `r->ctx` and `w->ctx` are used as pointers to the instance-specific
struct that has been allocated (see the sketch below)
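A sketch of the pattern this enables for implementations, assuming a
writer callback shaped like the existing ones and a
`Curl_cwriter_write()` helper for passing data down the chain; the
example struct and callback names are made up:
```c
/* Illustrative writer implementation using w->ctx */
struct my_cw_ctx {
  struct Curl_cwriter super;   /* the generic part, first member */
  curl_off_t bytes_seen;       /* implementation-specific state */
};

static CURLcode my_cw_write(struct Curl_easy *data, struct Curl_cwriter *w,
                            int type, const char *buf, size_t blen)
{
  struct my_cw_ctx *ctx = w->ctx;  /* ctx points at the allocation */
  if(type & CLIENTWRITE_BODY)
    ctx->bytes_seen += (curl_off_t)blen;
  return Curl_cwriter_write(data, w->next, type, buf, blen);
}
```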
Reported-by: Rudi Heitbaum
Fixes #13035
Closes #13059
- the change breaks out of the receive loop in transfer.c for transfers
that are speed limited, once they have gotten *some* bytes
- the overall speed limit timing is done in multi.c
Reported-by: Dmitry Karpov
Bug: https://curl.se/mail/lib-2024-03/0001.html
Closes #13050
Add `mime` client reader. Encapsulates reading from mime parts, getting
their length, rewinding and unpausing.
- remove special mime handling from sendf.c and easy.c
- add general "unpause" method to client readers
- use new reader in http/imap/smtp
- make some mime functions static that are now only used internally
In addition:
- remove flag 'forbidchunk' as no longer needed
Closes #13039
- set TIMER_STARTTRANSFER on seeing the first response bytes
in the download client writer, not coming from a CONNECT
- initialize the timer the same way for all protocols
- remove explicit setting of TIMER_STARTTRANSFER in file.c
and c-hyper.c
Closes #13052
If a response without a status line is received, and the connection is
known to use HTTP/1.x (not HTTP/0.9), report the error "Invalid status
line" instead of "Received HTTP/0.9 when not allowed".
Closes #13045
In cases where the connection was fast, curl sometimes failed to open a
connection. This fixes a regression of c2d973627b.
The regression triggered in these steps:
1. Create an smtp connection
2. Use STARTTLS
3. Receive the response
4. We are inside the loop in `smtp_statemachine`, calling
`smtp_state_starttls_resp`
5. In the good flow, we exit the loop, re-enter `smtp_statemachine` and
run `smtp_perform_upgrade_tls` at the start of the function.
In the bad flow, we stay in the while loop, calling
`Curl_pp_readresp`, which reads part of the TLS handshake and things
go wrong.
The reason is that `Curl_pp_moredata` changed behavior and always
returns `true`, so we stay in the loop in `smtp_statemachine`. With a
slow connection `Curl_pp_readresp` cannot read new data and returns
`CURLE_AGAIN`, so we leave the loop and re-enter `smtp_statemachine`.
With a fast connection, `Curl_pp_readresp` reads new data from the tcp
connection, which is part of the TLS handshake.
The fix is in `Curl_pp_moredata`, which needs to take the final line
into account and return `false` if only the final line is stored.
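A sketch of the intended check; the pingpong field names (`sendleft`,
`recvbuf`, `nfinal`) are assumptions about lib/pingpong.h after its
dynbuf conversion, and the real `Curl_pp_moredata` may be shaped
differently:
```c
/* Illustrative: only claim "more data" when the receive buffer holds
   something beyond the already stored final response line */
static bool pp_moredata(struct pingpong *pp)
{
  return !pp->sendleft && (Curl_dyn_len(&pp->recvbuf) > pp->nfinal);
}
```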
Closes #13048
- update client reader documentation
- client reader, add rewind capabilities
- tell creader to rewind on next start
- Curl_client_reset() will keep reader for future rewind if requested
- add Curl_client_cleanup() for freeing all resources independent of
rewinds
- add Curl_client_start() to trigger rewinds
- move rewind code from multi.c to sendf.c and make part of
"cr-in"'s implementation
- http, move the "resume_from" handling into the client readers
- the setup of an HTTP request is reshuffled to follow:
* determine method, target, auth negotiation
* install the client reader(s) for the request, including crlf
conversions and "chunked" encoding
* apply ranges to client reader
* concat request headers, upgrades, cookies, etc.
* complete request by determining Content-Length of installed
readers in combination with method
* send
- add methods for client readers to
* return the overall length they will generate (or -1 when unknown)
* return the amount of data on the CLIENT level, so that
expect-100 can decide if it wants to apply itself (sketched below)
* set a "resume_from" offset or fail if unsupported
- struct HTTP has become largely empty now
- rename `Client_reader_*` to `Curl_creader_*`
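A hedged sketch of how the expect-100 decision could use those reader
methods; the helper name, the exact method names and the
`EXPECT_100_THRESHOLD` constant are assumptions, not necessarily the
code in lib/http.c:
```c
/* Illustrative: decide whether to announce "Expect: 100-continue" */
static bool want_expect_100(struct Curl_easy *data)
{
  /* data length at the CLIENT level, before chunking/conversions,
     or -1 when unknown (e.g. a read callback of unknown size) */
  curl_off_t client_len = Curl_creader_client_length(data);
  return (client_len < 0) || (client_len >= EXPECT_100_THRESHOLD);
}
```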
Closes #13026
Caused by an accidentally duplicated line in
d6825df334.
```
.../lib/vquic/curl_osslq.c:1095:30: warning: implicit conversion loses integer precision: 'curl_socket_t' (aka 'unsigned long long') to 'int' [-Wshorten-64-to-32]
1095 | bio = BIO_new_dgram(ctx->q.sockfd, BIO_NOCLOSE);
| ~~~~~~~~~~~~~ ~~~~~~~^~~~~~
1 warning and 2 errors generated.
```
Reviewed-by: Stefan Eissing
Closes #13043
- rename static functions to avoid duplicate symbols in unity mode.
- windows -> Windows/window in error message and comment.
- fix indentation.
Reviewed-by: Stefan Eissing
Closes #13044
A libpsl install without data and no built-in database is now considered
bad enough to reject all cookies since they cannot be checked. It is
somewhat of a user error, but still.
Reported-by: Dan Fandrich
Closes #13033
- Move all the "upload_done" handling to request.c
- add possibility to abort sending of a request
- add `Curl_req_done_sending()` for checks (see the sketch below)
- transfer.c: readwrite_upload() now clean
- removing data->state.ulbuf and data->req.upload_fromhere
- as well as data->req.upload_present
- set data->req.upload_done on having read all from
the client and completely flushed the send buffer
- tftp, remove setting of data->req.upload_fromhere
- serves no purpose as `upload_present` is not set
and the data itself is sent directly via `sendto()` anyway
- smtp, make upload EOB conversion a client reader
- xfer_ulbuf addition
- add xfer_ulbuf for borrowing, similar to xfer_buf
- use in file upload
- use in c-hyper body sending
- h1-proxy, remove init of data->state.ulbuf that is never used
- smb, add own send_buf instead of using data->state.ulbuf
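A minimal sketch of the "done sending" logic described above; `eos_read`
stands in for whatever flag records that the client readers reported
end-of-stream, and the exact code in request.c may differ:
```c
/* Illustrative: upload is done once all client data has been read AND
   the request's send buffer has been completely flushed */
static void req_check_upload_done(struct Curl_easy *data, bool eos_read)
{
  if(eos_read && Curl_bufq_is_empty(&data->req.sendbuf))
    data->req.upload_done = TRUE;
}

bool Curl_req_done_sending(struct Curl_easy *data)
{
  return data->req.upload_done;
}
```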
Closes #13010
- when unable to obtain a new chunk on a softlimit bufq, this is an
allocation error and needs to be reported as such.
- writes into a softlimit bufq must never be a partial success (see the
sketch below)
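A sketch of the intended caller-visible behavior, using the existing
`Curl_bufq_write()` API; the wrapper function is illustrative:
```c
/* Illustrative: appending to a bufq created with BUFQ_OPT_SOFT_LIMIT */
static CURLcode append_all(struct bufq *q, const unsigned char *buf,
                           size_t blen)
{
  CURLcode result = CURLE_OK;
  ssize_t n = Curl_bufq_write(q, buf, blen, &result);
  if(n < 0)
    return result;  /* chunk allocation failed: a real error such as
                       CURLE_OUT_OF_MEMORY, not a short write */
  /* on a soft-limited bufq, a successful write is never partial */
  DEBUGASSERT((size_t)n == blen);
  return CURLE_OK;
}
```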
Reported-by: Dan Fandrich
Fixes #13020
Closes #13023
This fixes miscellaneous typos and duplicated words in the docs, lib
and test comments, and a few user-facing error strings.
Author: RainRat on Github
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Dan Fandrich <dan@coneharvesters.com>
Closes: #13019
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
Added as documented in
[CLIENT-READERS.md](5b1f31dfba/docs/CLIENT-READERS.md).
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending requests headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves #12938 as now small and large requests have the
same code path.
Changes done to test cases:
- test513: now fails before sending request headers as this initial
"client read" triggers the setup fault. It now behaves the same as in
a hyper build.
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes #12969
The curldown conversion accidentally replaced daniel@haxx.se with
just daniel.se. This reverts back to the proper email address in
the curldown docs as well as in a few other stray places where it
was incorrect (while unrelated to curldown).
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Closes: #12997
When disabling all protocols without enabling any, the resulting
set of allowed protocols remained the default set. Clearing the
allowed set before inspecting the value passed to --proto makes the
set empty even in the error path where no protocols are enabled.
Co-authored-by: Dan Fandrich <dan@telarity.com>
Reported-by: Dan Fandrich <dan@telarity.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Closes: #13004
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk
Add the method for sending not just raw bytes, but bytes that are
either "headers" or "body". The send abstraction stack, top to bottom
(sketched below), now is:
* `Curl_req_send()`: has parameter to indicate amount of header bytes,
buffers all data.
* `Curl_xfer_send()`: knows on which socket index to send, returns
amount of bytes sent.
* `Curl_conn_send()`: called with socket index, returns amount of bytes
sent.
In addition there is `Curl_req_flush()` for writing out all buffered
bytes.
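A hedged sketch of the layering with simplified signatures; the real
prototypes in request.h/transfer.h may differ:
```c
/* top: buffers the bytes, knows how many of them are header bytes */
CURLcode Curl_req_send(struct Curl_easy *data, const char *buf,
                       size_t blen, size_t header_len);

/* middle: picks conn->sockfd/conn->writesockfd, reports bytes sent */
CURLcode Curl_xfer_send(struct Curl_easy *data, const void *buf,
                        size_t blen, size_t *pnwritten);

/* bottom: sends on an explicit socket index of the connection */
CURLcode Curl_conn_send(struct Curl_easy *data, int sockindex,
                        const void *buf, size_t blen, size_t *pnwritten);

/* write out everything Curl_req_send() has buffered but not yet sent */
CURLcode Curl_req_flush(struct Curl_easy *data);
```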
`Curl_req_send()` is active for requests without body,
`Curl_buffer_send()` still being used for others. This is because the
special quirks need to be addressed in future parts:
* `expect-100` handling
* `Curl_fillreadbuffer()` needs to add directly to the new
`data->req.sendbuf`
* special body handlings, like `chunked` encodings and line end
conversions will be moved into something like a Client Reader.
In functions of the pattern `CURLcode xxx_send(..., ssize_t *written)`,
replace the `ssize_t` with a `size_t`. It makes no sense to allow for negative
values as the returned `CURLcode` already specifies error conditions. This
allows easier handling of lengths without casting.
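Below is a hedged illustration of the changed pattern; `xxx_send` is a
generic placeholder and the `Curl_xfer_send()` signature shown is
simplified:
```c
/* Illustrative: errors travel in the CURLcode, so *written only ever
   holds 0..blen and no signed type or negative checks are needed */
static CURLcode xxx_send(struct Curl_easy *data, const void *buf,
                         size_t blen, size_t *written)
{
  return Curl_xfer_send(data, buf, blen, written);
}
```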
Closes #12964
If the easy handle that is being added to a multi handle has previously
been used for curl_easy_perform(), there is a private multi handle here
that we can kill off. While this flushes some caches etc. that the easy
handle could otherwise benefit from if it were used for an easy
interface transfer again after being used in the multi stack, this
cleanup simplifies behavior and uses less memory.
Closes #12992
Curl_read/Curl_write clarifications
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to clarify
when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer setup of
`conn->sockfd` and `conn->writesockfd` on which connection filter
chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index as
parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for naming
consistency
- clarify that the special CURLE_AGAIN handling to return `CURLE_OK`
with length 0 only applies to `Curl_xfer_send()` and CURLE_AGAIN is
returned by all other send() variants.
SingleRequest reshuffling
- move functions into request.[ch]
- differentiate between reset and free
- add Curl_req_done() to perform last actions
- add a send `bufq` to SingleRequest for future use in keeping upload data
Closes #12963
In order to make MSAN happy:
==2200945==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x596f3b3ed246 in curlx_strtoofft [...]/libcurl/src/lib/strtoofft.c:239:11
#1 0x596f3b402156 in Curl_httpchunk_read [...]/libcurl/src/lib/http_chunks.c:149:12
#2 0x596f3b348550 in readwrite_data [...]/libcurl/src/lib/transfer.c:607:11
[...]
==2202041==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x5a3fab66a72a in Curl_parse_port [...]/libcurl/src/lib/urlapi.c:547:8
#1 0x5a3fab650645 in parse_authority [...]/libcurl/src/lib/urlapi.c:796:12
#2 0x5a3fab6740f6 in parseurl [...]/libcurl/src/lib/urlapi.c:1176:16
#3 0x5a3fab664fc5 in parseurl_and_replace [...]/libcurl/src/lib/urlapi.c:1342:12
[...]
==2202320==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x569076a0d6b0 in ipv4_normalize [...]/libcurl/src/lib/urlapi.c:683:12
#1 0x5690769f2820 in parse_authority [...]/libcurl/src/lib/urlapi.c:803:10
#2 0x569076a160f6 in parseurl [...]/libcurl/src/lib/urlapi.c:1176:16
#3 0x569076a06fc5 in parseurl_and_replace [...]/libcurl/src/lib/urlapi.c:1342:12
[...]
Signed-off-by: Louis Solofrizzo <lsolofrizzo@scaleway.com>
Closes #12995
Refactoring of the client writer that passes the data to the
client/application's callback functions.
- split out into own source cw-out.[ch] from sendf.c
- move tempwrite and tempcount from data->state into the context of the
client writer
- redesign the 3 tempwrite dynbufs as a linked list of dynbufs. On
paused transfers, this allows "recording" interleaved HEADER/BODY
chunks to be "played back" in the same order on unpausing (see the
sketch below).
- keep the overall size limit of all buffered data to DYN_PAUSE_BUFFER.
On exceeding that, return CURLE_TOO_LARGE instead of
CURLE_OUT_OF_MEMORY as before.
- add method to be called when a transfer is DONE to allow writing of
any data still buffered
- when paused, record HEADER writes exactly as they come for later
playback. HEADERs are documented to be written one-by-one.
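A sketch of the buffering structure this describes; the struct and
field names are illustrative, not necessarily those in cw-out.[ch]:
```c
/* Illustrative: one recorded chunk of paused output */
struct cw_out_buf {
  struct cw_out_buf *next;   /* singly linked, kept in arrival order */
  int type;                  /* CLIENTWRITE_HEADER or CLIENTWRITE_BODY */
  struct dynbuf b;           /* the bytes to replay on unpause */
};

/* Illustrative: the cw-out writer context */
struct cw_out_ctx {
  struct Curl_cwriter super;
  struct cw_out_buf *buf;    /* list of chunks recorded while paused */
  size_t buf_total;          /* sum of chunk sizes, capped at
                                DYN_PAUSE_BUFFER; exceeding the cap
                                returns CURLE_TOO_LARGE */
  BIT(paused);
};
```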
Closes #12898