To avoid a local hack to pass function pointers and to avoid
deprecation warnings when building with libssh2 v1.11.1 or newer:
```
lib/vssh/libssh2.c:3324:5: warning: 'libssh2_session_callback_set' is deprecated: since libssh2 1.11.1. Use libssh2_session_callback_set2() [-Wdeprecated-declarations]
lib/vssh/libssh2.c:3326:5: warning: 'libssh2_session_callback_set' is deprecated: since libssh2 1.11.1. Use libssh2_session_callback_set2() [-Wdeprecated-declarations]
```
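A minimal sketch, assuming libssh2 1.11.1 or newer is available at build time, of what the switch to `libssh2_session_callback_set2()` can look like; the callback and helper names are illustrative, not curl's actual code:
```c
/* Illustrative only: register a socket recv callback with libssh2.
   libssh2_session_callback_set2() takes and returns the generic
   libssh2_cb_generic pointer type, avoiding the local cast hack the
   deprecated libssh2_session_callback_set() required. */
#include <libssh2.h>
#include <sys/socket.h>  /* recv() on POSIX */

static ssize_t my_recv(libssh2_socket_t sock, void *buffer, size_t length,
                       int flags, void **abstract)
{
  (void)flags;
  (void)abstract;
  return recv(sock, buffer, length, 0);
}

static void set_recv_callback(LIBSSH2_SESSION *session)
{
#if LIBSSH2_VERSION_NUM >= 0x010b01 /* 1.11.1 */
  libssh2_session_callback_set2(session, LIBSSH2_CALLBACK_RECV,
                                (libssh2_cb_generic *)my_recv);
#else
  libssh2_session_callback_set(session, LIBSSH2_CALLBACK_RECV,
                               (void *)my_recv);
#endif
}
```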
Ref: https://github.com/curl/curl-for-win/actions/runs/7609484879/job/20720821100#step:3:4982
Ref: https://github.com/libssh2/libssh2/pull/1285
Ref: c0f69548be
Reviewed-by: Daniel Stenberg
Closes #12754
- HTTP/3 for curl using OpenSSL's own QUIC stack together
with nghttp3
- configure with `--with-openssl-quic` to enable curl to
build this. This requires the nghttp3 library. (A usage
sketch follows the restrictions below.)
- implementation with the following restrictions:
* macOS has to use an unconnected UDP socket due to an
issue in OpenSSL's datagram implementation. See
https://github.com/openssl/openssl/issues/23251
This makes connections to non-responsive servers hang.
* GET requests will send the indicator that they have
no body in a separate QUIC packet. This may result
in processing delays or Transfer-Encodings on proxied
requests
* uploads that encounter blocks will use 100% CPU as
detection of this flow control issue is not working
(we have not figured out how to pry that from OpenSSL).
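A minimal sketch, using only public libcurl API, of how an application could ask for HTTP/3 from a curl built this way; the URL is just an example:
```c
/* Illustrative only: request HTTP/3 (with fallback to earlier versions)
   on an easy handle. Requires a libcurl built with an HTTP/3 backend,
   e.g. via --with-openssl-quic and nghttp3. */
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                     (long)CURL_HTTP_VERSION_3);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```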
Closes #12734
Fix the `ENABLE_MANUAL` option. Set it to default to `OFF`.
Before this patch `ENABLE_MANUAL=ON` was a no-op, even though it was the
option designed to enable building and using the built-in curl manual.
(The `USE_MANUAL=ON` option worked for this instead, by accident.)
Ref: https://github.com/curl/curl/pull/12730#issuecomment-1902572409
Closes #12749
- remove use of .BI for code snippet
- stop using .br, just do a blank line
- remove use of .PP
- remove use of .sp
- remove backslash in .IP
- use .IP instead of .TP
Closes #12731
The `@filename` style was never documented for --cookie <data|filename>,
but prior to this change curl would accept it anyway and always treat a
@-prefixed string as a filename.
That's a problem if the string also contains a = sign because then it is
documented to be interpreted as a cookie string and not a filename.
Example:
`--cookie @foo=bar`
Before: Interpreted as load cookies from filename foo=bar.
After: Interpreted as cookie `@foo=bar` (name `@foo` and value `bar`).
Other curl options with a data/filename option-value use the `@filename`
style to distinguish filenames, which is probably how this happened. The
--cookie option has never been documented that way.
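For comparison, a minimal sketch of the libcurl options that keep the two cases separate (both are public libcurl API; the values are just examples):
```c
/* Illustrative only: a cookie string versus a cookie file at the
   libcurl level. */
#include <curl/curl.h>

static void set_cookies(CURL *curl)
{
  /* a literal cookie string, even one whose name starts with '@' */
  curl_easy_setopt(curl, CURLOPT_COOKIE, "@foo=bar");

  /* read cookies from a file instead */
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies.txt");
}
```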
Ref: https://curl.se/docs/manpage.html#-b
Closes https://github.com/curl/curl/pull/12645
- in both encoding and decoding, check the websocket frame payload
lengths for negative values (from curl_off_t) and fail the operation in
that case (sketched below)
- add test 2307 to verify
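A minimal sketch of the kind of check described above; the helper name and the error code chosen here are hypothetical, not curl's internal code:
```c
/* Illustrative only: reject a websocket frame payload length that is
   negative (curl_off_t is signed). */
#include <curl/curl.h>   /* CURLcode, curl_off_t */

static CURLcode check_ws_payload_len(curl_off_t payload_len)
{
  if(payload_len < 0)
    return CURLE_BAD_FUNCTION_ARGUMENT;  /* hypothetical error choice */
  return CURLE_OK;
}
```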
Closes #12707
- the URL is capped at 80 cols, which ruins it if longer
- it does not strip off URL credentials (see the sketch below)
- it is done unconditionally, not on --xattr
- we don't have Amiga in the CI which makes fixing it blindly fragile
Someone who builds and tests on Amiga can add it back correctly in the
future if there is a desire.
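A minimal sketch, using the public curl_url API, of how credentials could be stripped from a URL before storing it (for example in an xattr); this is not the removed Amiga code:
```c
/* Illustrative only: parse the URL, drop user and password, and return
   a newly allocated cleaned URL (caller frees with curl_free()). */
#include <curl/curl.h>

static char *strip_credentials(const char *url)
{
  char *clean = NULL;
  CURLU *h = curl_url();
  if(h) {
    if(!curl_url_set(h, CURLUPART_URL, url, 0)) {
      /* setting a part to NULL removes it */
      curl_url_set(h, CURLUPART_USER, NULL, 0);
      curl_url_set(h, CURLUPART_PASSWORD, NULL, 0);
      curl_url_get(h, CURLUPART_URL, &clean, 0);
    }
    curl_url_cleanup(h);
  }
  return clean;
}
```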
Reported-by: Harry Sintonen
Closes #12709
- enforce a response body length of 0 if the response has no
Content-Length header, as the RTSP spec requires (sketched below)
- excess bytes in a response body are forwarded to the client writers,
which will report the problem and fail the transfer
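A minimal sketch of the length rule described above; the helper is hypothetical and not curl's internal code:
```c
/* Illustrative only: without Content-Length the expected RTSP body
   length is 0, so any received body byte counts as excess. */
#include <curl/curl.h>   /* curl_off_t */
#include <stdbool.h>

static bool rtsp_body_is_excess(bool has_content_length,
                                curl_off_t content_length,
                                curl_off_t received)
{
  curl_off_t expected = has_content_length ? content_length : 0;
  return received > expected;
}
```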
Follow-up to d7b6ce6
Fixes #12701
Closes #12706
The libpsl version output otherwise also includes version numbers for
its dependencies, like the IDN library, but since libcurl does not use
libpsl's IDN functionality those components are not important.
Ref: https://github.com/curl/curl-for-win/issues/63
Closes #12700
... since this function has not supported null pointer fd_set arguments
since at least 2006. (That's when I stopped my git blame journey.)
Fixes #12691
Reported-by: sfan5 on github
Closes #12692
This clarifies the handling of server responses by folding the code for
the complicated protocols into their protocol handlers. This concerns
mainly HTTP and its bastard sibling RTSP.
The terms "read" and "write" are often used without clear context as to
whether they refer to the connection or the client/application side of a
transfer. This PR uses "read/write" for operations on the client side
and "send/receive" for the connection, i.e. server, side. If this is
considered useful, we can revisit renaming of further methods in another
PR.
Curl's protocol handler `readwrite()` method has been changed:
```diff
- CURLcode (*readwrite)(struct Curl_easy *data, struct connectdata *conn,
- const char *buf, size_t blen,
- size_t *pconsumed, bool *readmore);
+ CURLcode (*write_resp)(struct Curl_easy *data, const char *buf, size_t blen,
+ bool is_eos, bool *done);
```
The name was changed to clarify that this writes response data to the
client side. The parameter changes are:
* `conn` removed as it always operates on `data->conn`
* `pconsumed` removed as the method needs to handle all data on success
* `readmore` removed as no longer necessary
* `is_eos` as indicator that this is the last call for the transfer
response (end-of-stream).
* `done` TRUE on return iff the transfer response is to be treated as
finished
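A minimal sketch, not an actual curl handler, of what a `write_resp()` implementation following these semantics could look like; `Curl_client_write()` and the `CLIENTWRITE_*` flags are curl internals used here only for illustration:
```c
/* Illustrative only: pass all response bytes to the client side and
   signal when the response is complete. */
static CURLcode myproto_write_resp(struct Curl_easy *data, const char *buf,
                                   size_t blen, bool is_eos, bool *done)
{
  int flags = CLIENTWRITE_BODY | (is_eos ? CLIENTWRITE_EOS : 0);
  CURLcode result = Curl_client_write(data, flags, buf, blen);
  /* the response is finished once the last (end-of-stream) bytes have
     been written successfully */
  *done = (is_eos && !result);
  return result;
}
```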
This change affects many files only because of updated comments in
handlers that provide no implementation. The real change is that the
HTTP protocol handlers now provide an implementation.
The HTTP protocol handlers `write_resp()` implementation will get passed
**all** raw data of a server response for the transfer. The HTTP/1.x
formatted status and headers, as well as the undecoded response
body. `Curl_http_write_resp_hds()` is used internally to parse the
response headers and pass them on. This method is public as the RTSP
protocol handler also uses it.
HTTP/1.1 "chunked" transport encoding is now part of the general
*content encoding* writer stack, just like other encodings. A new flag
`CLIENTWRITE_EOS` was added for the last client write. This allows
writers to verify that they are in a valid end state. The chunked
decoder will check if it indeed has seen the last chunk.
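A generic sketch of the end-state check this enables; the flag value, struct, and function names are hypothetical stand-ins, not curl's chunked decoder:
```c
/* Illustrative only: a decoding writer can verify, on the final write,
   that it actually reached its end state. */
#include <stdbool.h>
#include <stddef.h>

#define WRITE_FLAG_EOS (1 << 0)  /* stand-in for curl's CLIENTWRITE_EOS */

struct chunk_state {
  bool saw_final_chunk;          /* set once the terminating 0-chunk is seen */
};

static int chunked_write(struct chunk_state *st, int flags,
                         const char *buf, size_t blen)
{
  (void)buf;
  (void)blen;
  /* ... decode chunk framing from buf/blen, set saw_final_chunk ... */
  if((flags & WRITE_FLAG_EOS) && !st->saw_final_chunk)
    return -1;                   /* incomplete response: fail the transfer */
  return 0;
}
```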
The general response handling in `transfer.c:466` happens in function
`readwrite_data()`. This now mainly operates like:
```
static CURLcode readwrite_data(data, ...)
{
  do {
    Curl_xfer_recv_resp(data, buf)
    ...
    Curl_xfer_write_resp(data, buf)
    ...
  } while(interested);
  ...
}
```
All the response data handling is implemented in
`Curl_xfer_write_resp()`. It calls the protocol handler's `write_resp()`
implementation if available, or does the default behaviour.
All raw response data needs to pass through this function, which also
means that anyone in possession of such data may call
`Curl_xfer_write_resp()`.
Closes #12480
Previously it would match only on a sequence of non-space characters,
which made it fail to highlight for example "public suffix list".
Updated the recent cookie.d edit from 5da57193b7 to use bold instead
of italics.
Closes #12689
Perhaps most importantly: when using OpenSSL, the build/flavor used must
have the QUIC API. Vanilla OpenSSL does not; only BoringSSL, libressl,
AWS-LC and quictls do.
Ref: 5d044ad948 (r136780413)
Closes #12683
The total timer is properly reset in MSTATE_INIT. MSTATE_CONNECT starts
by resetting the timer that is the start point for further multi states.
For file://, MSTATE_DO calls file_do(), which should not reset the total
timer. Otherwise, the total time is always less than the pre-transfer
and start-transfer times.
Closes #12682
- `conn->sockfd` is set by `Curl_setup_transfer()`, but that
is called *after* the connection has been established
- use `conn->sock[FIRSTSOCKET]` instead
Follow-up to a0f94800d5
Closes #12664