`CURLDEBUG` is meant to enable memory tracking, but in a bunch of cases,
it was protecting debug features that were supposed to be guarded with
`DEBUGBUILD`.
Replace these uses with `DEBUGBUILD`.
This leaves `CURLDEBUG` uses solely for its intended purpose: to enable
the memory tracking debug feature.
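For illustration, a minimal sketch of the intended split between the two
guards; the declarations are hypothetical placeholders, not curl's actual
code:
```c
#include <stddef.h>

/* Sketch only: after this change, DEBUGBUILD guards general debug-only
   features while CURLDEBUG guards nothing but memory tracking. */
#ifdef DEBUGBUILD
void debug_only_diagnostics(void);   /* extra checks, test hooks */
#endif

#ifdef CURLDEBUG
/* memory tracking wrappers only */
void *tracked_malloc(size_t size, int line, const char *source);
#endif
```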
Also:
- autotools: rely on `DEBUGBUILD` to enable `checksrc`, instead of
`CURLDEBUG`. Using `CURLDEBUG` worked in most cases because debug
builds enable it by default, but it is not accurate.
- include `lib/easyif.h` instead of keeping a copy of a declaration.
- add CI test jobs for the build issues discovered.
Ref: https://github.com/curl/curl/pull/13694#issuecomment-2120311894
Closes#13718
- HEADERFUNCTION callbacks might inspect response properties like
CURLINFO_CONTENT_LENGTH_DOWNLOAD_T on seeing the last header line. If
that line is written before these are initialized, the values are not
available.
- write the last header line late when analyzing an HTTP response so
that all information is available at the time of writing.
- add test1485 to verify that CURLINFO_CONTENT_LENGTH_DOWNLOAD_T works
on seeing the last header.
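A minimal sketch (hypothetical URL) of the pattern test1485 verifies: a
header callback that reads CURLINFO_CONTENT_LENGTH_DOWNLOAD_T once the
blank line terminating the headers arrives:
```c
#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* called once per header line; the final call is the blank "\r\n" */
static size_t on_header(char *buf, size_t size, size_t nitems, void *userdata)
{
  size_t len = size * nitems;
  CURL *curl = userdata;
  if(len == 2 && !memcmp(buf, "\r\n", 2)) {
    curl_off_t cl = -1;
    /* with this fix, the value is already set up at this point */
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &cl);
    printf("content length: %" CURL_FORMAT_CURL_OFF_T "\n", cl);
  }
  return len;
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
    curl_easy_setopt(curl, CURLOPT_HEADERDATA, curl);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```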
Fixes#13752
Reported-by: Harry Sintonen
Closes#13757
- as reported in #13725, some servers wrongly send body bytes in
responses to a HEAD request. This used to be tolerated in curl
8.4 and before and leads to failed transfers in newer versions.
- restore previous behaviour for HTTP/1.1 and HTTP/2:
* 1.1: do not add 'Transfer-Encoding' writers from HEAD
responses. RFC 9112 says they do not apply.
* 2: when the transfer expects 'no_body', do not report stream
resets as an error once all response headers have been received.
Reported-by: Jeroen Ooms
Fixes#13725
Closes#13732
A connection that has seen an HTTP major version now refuses any other
major HTTP version in future responses. Previously, an HTTP/1.x
connection would silently accept HTTP/2 or HTTP/3 in the status line
as long as support for those versions was built in. That would then
just lead to confusion and badness.
Indirectly spotted by CodeSonar, which identified a duplicate
assignment in this function.
Add test 471 to verify.
Closes#13421
Before this patch `lib/curl_setup.h` defined these two macros right
next to each other, then the source code used them interchangeably.
After this patch, `USE_HTTP3` guards all HTTP/3 / QUIC features, like
`USE_HTTP2` does for HTTP/2. `ENABLE_QUIC` is no longer used.
This patch doesn't change the way HTTP/3 is enabled via autotools
or CMake. Builders who enabled HTTP/3 manually by defining both of
these macros via `CPPFLAGS` can now delete `-DENABLE_QUIC`.
Closes#13352
Reduced the size of the dynamically_allocated_data structure.
Reduced the number of stored values in enum dupstring and enum dupblob;
this shrinks the corresponding array placed in the UserDefined
structure.
Closes#13188
- when an application forces HTTP/1.1 chunked transfer encoding
by setting the corresponding header and instructs curl to use
the CURLOPT_READFUNCTION, disregard any POST length information.
- this establishes backward compatibility with previous curl versions
Applications are encouraged to not force "chunked", but rather
set length information for a POST. By setting -1, curl will
auto-select chunked on HTTP/1.1 and work properly on other HTTP
versions.
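A hedged sketch of the application pattern described above (hypothetical
URL and payload): a read callback combined with a forced "chunked"
header, with the preferred length-based alternative shown as a comment:
```c
#include <string.h>
#include <curl/curl.h>

struct upload { const char *p; size_t left; };

static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
{
  struct upload *u = userdata;
  size_t n = size * nitems;
  if(n > u->left)
    n = u->left;
  memcpy(buf, u->p, n);
  u->p += n;
  u->left -= n;
  return n;
}

int main(void)
{
  static const char payload[] = "field=value";
  struct upload u = { payload, sizeof(payload) - 1 };
  CURL *curl = curl_easy_init();
  struct curl_slist *hdrs = NULL;
  if(curl) {
    /* forcing chunked: with a read callback, curl again disregards any
       POST length information, as it did in 8.4 and earlier */
    hdrs = curl_slist_append(hdrs, "Transfer-Encoding: chunked");
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, &u);
    /* preferred instead of forcing the header: pass -1 and let curl
       pick chunked on HTTP/1.1 and do the right thing elsewhere */
    /* curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)-1); */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_slist_free_all(hdrs);
  }
  return 0;
}
```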
Reported-by: Jeff King
Fixes#13229
Closes#13257
- move the code that triggers on end-of-response into a separate
function, out of the parsing code
- simplify some headp/headerlen usage
- add `httpversion` to SingleRequest to indicate the version of the
current response
Closes#13134
Saving some cpu cycles in http response header processing:
- pass the length of the header line along
- use string constant sizeof() instead of strlen()
- check the line length before testing whether a prefix can match
- switch on the first header character to limit the number of checks
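An illustrative sketch of those techniques, not curl's actual parser
code:
```c
#include <stddef.h>
#include <strings.h>  /* strncasecmp */
#include <ctype.h>

/* check a known header prefix using the compile-time size of a string
   constant plus a length pre-check and a cheap first-character switch */
static int is_content_length(const char *line, size_t len)
{
  static const char prefix[] = "Content-Length:";
  if(len < sizeof(prefix) - 1)          /* too short, no strlen() needed */
    return 0;
  switch(tolower((unsigned char)line[0])) {
  case 'c':
    return !strncasecmp(line, prefix, sizeof(prefix) - 1);
  default:
    return 0;                           /* skip the compare entirely */
  }
}
```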
Closes#13143
Move all handling of HTTP's `Expect: 100-continue` feature into a client
reader. Add sending flag `KEEP_SEND_TIMED` that triggers transfer
sending on general events like a timer.
HTTP installs a `CURL_CR_PROTOCOL` reader when announcing `Expect:
100-continue`. That reader works as follows:
- on first invocation, records time, starts the `EXPIRE_100_TIMEOUT`
timer, disables `KEEP_SEND`, enables `KEEP_SEND_TIMED` and returns 0,
eos=FALSE like a paused upload.
- on subsequent invocation it checks if the timer has expired. If so, it
enables `KEEP_SEND` and switches to passing through reads to the
underlying readers.
Transfer handling's `readwrite()` will be invoked when a timer expires
(like `EXPIRE_100_TIMEOUT`) or when data from the server arrives. Seeing
`KEEP_SEND_TIMED`, it will try to upload more data, which triggers
reading from the client readers again. Which then may lead to a new
pausing or cause the upload to start.
Flags and timestamps connected to this have been moved from
`SingleRequest` into the reader's context.
Closes#13110
A transfer may perform several `SingleRequest`s to succeed. This happens
regularly for authentication, redirect follows and retries on failed
connections.
The "readwrite()" calls and functions connected to those carried a `bool
*done` parameter to indicate that the current `SingleRequest` is over.
This may happen before `upload_done` or `download_done` bits of
`SingleRequest` are set.
The problem is that `write_resp()` protocol handlers are now invoked in
places where the `bool *done` cannot be passed up to the caller. Instead
of being a bool in the call chain, it needs to become a member of
`SingleRequest`, reflecting its state.
This removes the `bool *done` parameter and adds the `done` bit to
`SingleRequest` instead. It adds `Curl_req_soft_reset()` for using a
`SingleRequest` in a follow up, clearing `done` and other
flags/counters.
Closes#13096
- seek_func/seek_client, use transfer values only
- remove copies held in `struct connectdata`, use only
ever `data->set.seek_func`
- resolves possible issues in multiuse connections
- the new mime post reader eliminates the need to ever overwrite this
- websockets, remove empty Curl_ws_done() function
Closes#13079
Add `mime` client reader. Encapsulates reading from mime parts, getting
their length, rewinding and unpausing.
- remove special mime handling from sendf.c and easy.c
- add general "unpause" method to client readers
- use new reader in http/imap/smtp
- make some mime functions static that are now only used internally
In addition:
- remove flag 'forbidchunk' as no longer needed
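For context, a minimal hedged example (hypothetical URL and field) of
the public mime API whose parts these readers consume:
```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_mime *mime = curl_mime_init(curl);
    curl_mimepart *part = curl_mime_addpart(mime);
    curl_mime_name(part, "field");
    curl_mime_data(part, "value", CURL_ZERO_TERMINATED);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/form");
    curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
    curl_easy_perform(curl);
    curl_mime_free(mime);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```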
Closes#13039
If a response without a status line is received, and the connection is
known to use HTTP/1.x (not HTTP/0.9), report the error "Invalid status
line" instead of "Received HTTP/0.9 when not allowed".
Closes#13045
- update client reader documentation
- client reader, add rewind capabilities
- tell creader to rewind on next start
- Curl_client_reset() will keep reader for future rewind if requested
- add Curl_client_cleanup() for freeing all resources independent of
rewinds
- add Curl_client_start() to trigger rewinds
- move rewind code from multi.c to sendf.c and make part of
"cr-in"'s implementation
- http, move the "resume_from" handling into the client readers
- the setup of an HTTP request is reshuffled to follow:
* determine method, target, auth negotiation
* install the client reader(s) for the request, including crlf
conversions and "chunked" encoding
* apply ranges to client reader
* concat request headers, upgrades, cookies, etc.
* complete request by determining Content-Length of installed
readers in combination with method
* send
- add methods for client readers to (see the sketch after this list):
* return the overall length they will generate (or -1 when unknown)
* return the amount of data on the CLIENT level, so that
expect-100 can decide if it wants to apply itself
* set a "resume_from" offset or fail if unsupported
- struct HTTP has become largely empty now
- rename `Client_reader_*` to `Curl_creader_*`
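A purely hypothetical sketch of what such reader methods could look
like; not curl's actual internal API:
```c
#include <curl/curl.h>  /* curl_off_t */

struct creader_ops_sketch {
  /* overall length this reader will generate, or -1 when unknown */
  curl_off_t (*total_length)(void *ctx);
  /* amount of data available at the CLIENT level, so that expect-100
     handling can decide whether it wants to apply itself */
  curl_off_t (*client_length)(void *ctx);
  /* set a "resume_from" offset; returns nonzero if unsupported */
  int (*resume_from)(void *ctx, curl_off_t offset);
};
```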
Closes#13026
- Move all the "upload_done" handling to request.c
- add possibility to abort sending of a request
- add `Curl_req_done_sending()` for checks
- transfer.c: readwrite_upload() now clean
- removing data->state.ulbuf and data->req.upload_fromhere
- as well as data->req.upload_present
- set data->req.upload_done on having read all from
the client and completely flushed the send buffer
- tftp, remove setting of data->req.upload_fromhere
- serves no purpose as `upload_present` is not set
and the data itself is directly `sendto()` anyway
- smtp, make upload EOB conversion a client reader
- xfer_ulbuf addition
- add xfer_ulbuf for borrowing, similar to xfer_buf
- use in file upload
- use in c-hyper body sending
- h1-proxy, remove init of data->state.ulbuf that is never used
- smb, add own send_buf instead of using data->state.ulbuf
Closes#13010
This fixes miscellaneous typos and duplicated words in the docs, lib
and test comments, and in a few user-facing error strings.
Author: RainRat on Github
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Dan Fandrich <dan@coneharvesters.com>
Closes: #13019
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
Added as documented in
[CLIENT-READERS.md](5b1f31dfba/docs/CLIENT-READERS.md).
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending requests headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves#12938 as now small and large requests have the
same code path.
Changes done to test cases:
- test513: now fails before sending request headers, as the initial
"client read" triggers the setup fault; it behaves the same as in the
hyper build.
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes#12969
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk
This adds a method for sending not just raw bytes, but bytes that are
either "headers" or "body". The send abstraction stack, top to bottom,
now is:
* `Curl_req_send()`: has parameter to indicate amount of header bytes,
buffers all data.
* `Curl_xfer_send()`: knows on which socket index to send, returns
amount of bytes sent.
* `Curl_conn_send()`: called with socket index, returns amount of bytes
sent.
In addition there is `Curl_req_flush()` for writing out all buffered
bytes.
`Curl_req_send()` is active for requests without a body, while
`Curl_buffer_send()` is still used for the others. This is because some
special quirks need to be addressed in future parts:
* `expect-100` handling
* `Curl_fillreadbuffer()` needs to add directly to the new
`data->req.sendbuf`
* special body handlings, like `chunked` encodings and line end
conversions will be moved into something like a Client Reader.
In functions of the pattern `CURLcode xxx_send(..., ssize_t *written)`,
replace the `ssize_t` with a `size_t`. It makes no sense to allow for negative
values as the returned `CURLcode` already specifies error conditions. This
allows easier handling of lengths without casting.
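A small before/after sketch of that pattern; the function names are
hypothetical and `struct Curl_easy` is kept opaque here:
```c
#include <curl/curl.h>   /* CURLcode */
#include <sys/types.h>   /* ssize_t */

struct Curl_easy;  /* opaque, internal to libcurl */

/* before: the sign of *written duplicated what the CURLcode expresses */
CURLcode foo_send_old(struct Curl_easy *data, const void *buf,
                      size_t blen, ssize_t *written);

/* after: errors travel in the CURLcode, lengths stay unsigned */
CURLcode foo_send_new(struct Curl_easy *data, const void *buf,
                      size_t blen, size_t *written);
```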
Closes#12964
Curl_read/Curl_write clarifications
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to clarify
when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer setup of
`conn->sockfd` and `conn->writesockfd` on which connection filter
chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index as
parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for naming
consistency
- clarify that the special CURLE_AGAIN handling to return `CURLE_OK`
with length 0 only applies to `Curl_xfer_send()` and CURLE_AGAIN is
returned by all other send() variants.
SingleRequest reshuffling
- move functions into request.[ch]
- differentiate between reset and free
- add Curl_req_done() to perform last actions
- add a send `bufq` to SingleRequest for future use in keeping upload data
Closes#12963
- from `conn->bits.authneg` to `data->req.authneg`
- this is a property of the request about to be made
and not a property of the connection
- in multiuse connections, transfers could potentially step on
each other's toes here.
Closes#12949
- add a client writer that "pushes" response headers written to the
client into the headers API, when that API is enabled
- remove special handling in sendf.c
- needs to be installed very early during connection setup to catch
CONNECT response headers
Closes#12880
Remove curl_mimepart object from UserDefined structure when
CURL_DISABLE_MIME flag is active. Reduce size of UserDefined structure.
Also remove unreachable code: when CURL_DISABLE_MIME is set, httpreq can
never have the HTTPREQ_POST_MIME value, and the same goes for the
CURL_DISABLE_FORM_API flag and the HTTPREQ_POST_FORM value.
Closes#12948
When checking if the user wants to replace the header, the check should
be case insensitive.
Add test 461 to verify.
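A hedged illustration of one way this check is commonly exercised
(hypothetical URL; assuming the header is one libcurl would otherwise
generate itself, such as `Accept:`): a custom header differing only in
case should still be detected and replace the generated one:
```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  struct curl_slist *hdrs = NULL;
  if(curl) {
    /* differs only in case from the internally used header name */
    hdrs = curl_slist_append(hdrs, "accept: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_slist_free_all(hdrs);
  }
  return 0;
}
```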
Found-by: Dan Fandrich
Ref: #12782
Closes#12784
This clarifies the handling of server responses by folding the code for
the complicated protocols into their protocol handlers. This concerns
mainly HTTP and its bastard sibling RTSP.
The terms "read" and "write" are often used without clear context if
they refer to the connect or the client/application side of a
transfer. This PR uses "read/write" for operations on the client side
and "send/receive" for the connection, e.g. server side. If this is
considered useful, we can revisit renaming of further methods in another
PR.
Curl's protocol handler `readwrite()` method has been changed:
```diff
- CURLcode (*readwrite)(struct Curl_easy *data, struct connectdata *conn,
- const char *buf, size_t blen,
- size_t *pconsumed, bool *readmore);
+ CURLcode (*write_resp)(struct Curl_easy *data, const char *buf, size_t blen,
+ bool is_eos, bool *done);
```
The name was changed to clarify that this writes response data to the
client side. The parameter changes are:
* `conn` removed as it always operates on `data->conn`
* `pconsumed` removed as the method needs to handle all data on success
* `readmore` removed as no longer necessary
* `is_eos` as indicator that this is the last call for the transfer
response (end-of-stream).
* `done` TRUE on return iff the transfer response is to be treated as
finished
This change affects many files only because of updated comments in
handlers that provide no implementation. The real change is that the
HTTP protocol handlers now provide an implementation.
The HTTP protocol handlers' `write_resp()` implementation will get passed
**all** raw data of a server response for the transfer. The HTTP/1.x
formatted status and headers, as well as the undecoded response
body. `Curl_http_write_resp_hds()` is used internally to parse the
response headers and pass them on. This method is public as the RTSP
protocol handler also uses it.
HTTP/1.1 "chunked" transport encoding is now part of the general
*content encoding* writer stack, just like other encodings. A new flag
`CLIENTWRITE_EOS` was added for the last client write. This allows
writers to verify that they are in a valid end state. The chunked
decoder will check if it indeed has seen the last chunk.
The general response handling in `transfer.c:466` happens in function
`readwrite_data()`. This mainly operates now like:
```
static CURLcode readwrite_data(data, ...)
{
do {
Curl_xfer_recv_resp(data, buf)
...
Curl_xfer_write_resp(data, buf)
...
} while(interested);
...
}
```
All the response data handling is implemented in
`Curl_xfer_write_resp()`. It calls the protocol handler's `write_resp()`
implementation if available, or does the default behaviour.
All raw response data needs to pass through this function. Which also
means that anyone in possession of such data may call
`Curl_xfer_write_resp()`.
Closes#12480
A new error code to be used when an internal field grows too large, like
when a dynbuf reaches its maximum. Previously it would return
CURLE_OUT_OF_MEMORY for this, which is highly misleading.
Ref: #12268Closes#12269
Based on the OpenSSF "Compiler Options Hardening Guide for C and C++"
(https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html)
as of 2023-11-29 [1].
Enable new recommended warnings (except `-Wsign-conversion`):
- enable `-Wformat=2` for clang (in both cmake and autotools).
- add `CURL_PRINTF()` internal attribute and mark functions accepting
printf arguments with it (a sketch follows this list). This is a copy
of the existing `CURL_TEMP_PRINTF()` but using `__printf__` to make it
compatible with redefining the `printf` symbol:
https://gcc.gnu.org/onlinedocs/gcc-3.0.4/gcc_5.html#SEC94
- fix `CURL_PRINTF()` and existing `CURL_TEMP_PRINTF()` for
mingw-w64 and enable it on this platform.
- enable `-Wimplicit-fallthrough`.
- enable `-Wtrampolines`.
- add `-Wsign-conversion` commented with a FIXME.
- cmake: enable `-pedantic-errors` the way we do it with autotools.
Follow-up to d5c0351055 #2747
- lib/curl_trc.h: use `CURL_FORMAT()`; this also enables the format
checks, which previously were always disabled due to the internal
`printf` macro.
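A sketch of how such an attribute macro is commonly defined; this is an
approximation, not curl's exact definition, which lives in its setup
headers and is guarded per compiler:
```c
/* illustrative approximation only */
#if defined(__GNUC__) || defined(__clang__)
#define CURL_PRINTF(fmt, arg) \
  __attribute__((format(__printf__, fmt, arg)))
#else
#define CURL_PRINTF(fmt, arg)
#endif

/* marking a function that takes printf-style arguments */
void trace_msg(const char *fmt, ...) CURL_PRINTF(1, 2);
```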
Fix the issues the new warnings uncovered:
- fix a bug where a `set_ipv6_v6only()` call was missed in builds with
`--disable-verbose` / `CURL_DISABLE_VERBOSE_STRINGS=ON`.
- add internal `FALLTHROUGH()` macro.
- replace obsolete fall-through comments with `FALLTHROUGH()`.
- fix fallthrough markups: delete redundant ones (showing up as
warnings in most cases), add missing ones, fix indentation.
- silence `-Wformat-nonliteral` warnings with llvm/clang.
- fix one `-Wformat-nonliteral` warning.
- fix new `-Wformat` and `-Wformat-security` warnings.
- fix `CURL_FORMAT_SOCKET_T` value for mingw-w64. Also move its
definition to `lib/curl_setup.h` allowing use in `tests/server`.
- lib: fix two wrongly passed string arguments in log outputs.
Co-authored-by: Jay Satiro
- fix new `-Wformat` warnings on mingw-w64.
[1] 56c0fde389/docs/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C%2B%2B.md
Closes#12489
Since the copy does not stop at a null byte, let's not call it anything
that makes you think it works like the common strndup() function.
Based on feedback from Jay Satiro, Stefan Eissing and Patrick Monnerat
Closes#12490
- bufref: use strndup
- cookie: use strndup
- formdata: use strndup
- ftp: use strndup
- gtls: use aprintf instead of malloc + strcpy * 2
- http: use strndup
- mbedtls: use strndup
- md4: use memdup
- ntlm: use memdup
- ntlm_sspi: use strndup
- pingpong: use memdup
- rtsp: use strndup instead of malloc, memcpy and null-terminate
- sectransp: use strndup
- socks_gssapi.c: use memdup
- vtls: use dynbuf instead of malloc, snprintf and memcpy
- vtls: use strdup instead of malloc + memcpy
- wolfssh: use strndup
Closes#12453
- add `SingleRequest->download_done` as indicator that
all download bytes have been received
- remove `stop_reading` bool from readwrite functions
- move excess body handling into client download writer
Closes#12371
- changed header/chunk/handler->readwrite prototypes to accept `buf`,
`blen` and a `pconsumed` pointer. They now get the buffer to work on
and report back how many bytes they consumed
- eliminated `k->str` in SingleRequest
- improved excess data handling to properly account for any body data
left in the headerb buffer
- reduced the `k->badheader` enum to a plain bool
Closes#12283
Fix compiler warnings in builds with disabled auths, NTLM and SPNEGO.
E.g. with `CURL_DISABLE_BASIC_AUTH` + `CURL_DISABLE_BEARER_AUTH` +
`CURL_DISABLE_DIGEST_AUTH` + `CURL_DISABLE_NEGOTIATE_AUTH` +
`CURL_DISABLE_NTLM` on non-Windows.
```
./curl/lib/http.c:737:12: warning: unused variable 'result' [-Wunused-variable]
CURLcode result = CURLE_OK;
^
./curl/lib/http.c:995:18: warning: variable 'availp' set but not used [-Wunused-but-set-variable]
unsigned long *availp;
^
./curl/lib/http.c:996:16: warning: variable 'authp' set but not used [-Wunused-but-set-variable]
struct auth *authp;
^
```
Regression from e92edfbef6 #11490
Fixes#12228
Closes#12335
GCC 14 introduces a new -Walloc-size warning, included in -Wextra, which
gives:
```
src/tool_operate.c: In function ‘add_per_transfer’:
src/tool_operate.c:213:5: warning: allocation of insufficient size ‘1’ for type ‘struct per_transfer’ with size ‘480’ [-Walloc-size]
213 | p = calloc(sizeof(struct per_transfer), 1);
| ^
src/var.c: In function ‘addvariable’:
src/var.c:361:5: warning: allocation of insufficient size ‘1’ for type ‘struct var’ with size ‘32’ [-Walloc-size]
361 | p = calloc(sizeof(struct var), 1);
| ^
```
The calloc prototype is:
```
void *calloc(size_t nmemb, size_t size);
```
So, just swap the number of members and size arguments to match the
prototype, as we're initialising 1 struct of size `sizeof(struct
...)`. GCC then sees we're not doing anything wrong.
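The corrected calls then read, with the member count first and the
element size second, matching the prototype:
```c
p = calloc(1, sizeof(struct per_transfer));  /* src/tool_operate.c */
p = calloc(1, sizeof(struct var));           /* src/var.c */
```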
Closes#12292