`CURLDEBUG` is meant to enable memory tracking, but in a bunch of cases,
it was protecting debug features that were supposed to be guarded with
`DEBUGBUILD`.
Replace these uses with `DEBUGBUILD`.
This leaves `CURLDEBUG` uses solely for its intended purpose: to enable
the memory tracking debug feature.
Also:
- autotools: rely on `DEBUGBUILD` to enable `checksrc`, instead of
  `CURLDEBUG`. The latter worked in most cases because debug builds
  enable `CURLDEBUG` by default, but it was not accurate.
- include `lib/easyif.h` instead of keeping a copy of a declaration.
- add CI test jobs for the build issues discovered.
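A minimal sketch of the intended separation of the two guards (illustrative,
not code from the curl sources):
```c
/* Illustrative only: DEBUGBUILD guards general debug-only features,
   CURLDEBUG guards nothing but the memory tracking feature. */
#ifdef DEBUGBUILD
  /* extra diagnostics, test hooks, stricter checks */
#endif

#ifdef CURLDEBUG
  /* memory tracking (memdebug) and nothing else */
#endif
```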
Ref: https://github.com/curl/curl/pull/13694#issuecomment-2120311894
Closes#13718
Also make the user and password arguments mandatory, since all code
paths in libcurl used them anyway.
Adapted unit test case 1620 to the new rules.
Closes#13584
Before this patch, `lib/curl_setup.h` defined the two macros `USE_HTTP3`
and `ENABLE_QUIC` right next to each other, and the source code used them
interchangeably.
After this patch, `USE_HTTP3` guards all HTTP/3 / QUIC features.
(Like `USE_HTTP2` does for HTTP/2.) `ENABLE_QUIC` is no longer used.
This patch doesn't change the way HTTP/3 is enabled via autotools
or CMake. Builders who enabled HTTP/3 manually by defining both of
these macros via `CPPFLAGS` can now delete `-DENABLE_QUIC`.
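Sketched as a guard, assuming a manual build that sets the macro via
`CPPFLAGS` (illustrative, not taken from `lib/curl_setup.h`):
```c
/* Before: -DENABLE_QUIC -DUSE_HTTP3 were both needed on CPPFLAGS.
   After:  -DUSE_HTTP3 alone enables the HTTP/3 / QUIC code paths. */
#ifdef USE_HTTP3
  /* HTTP/3 / QUIC features (formerly also guarded by ENABLE_QUIC) */
#endif
```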
Closes#13352
Before this patch, two macros were used to guard IPv6 features in curl
sources: `ENABLE_IPV6` and `USE_IPV6`. This patch makes the source use
the latter for consistency with other similar switches.
`-DENABLE_IPV6` remains accepted for compatibility as a synonym for
`-DUSE_IPV6`, when passed to the compiler.
`ENABLE_IPV6` also remains the name of the CMake and `Makefile.vc`
options to control this feature.
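A minimal sketch of how such a compatibility mapping can look (not the
literal code from the curl build files):
```c
/* Sketch: accept the old macro name as a synonym, use USE_IPV6 in code */
#if defined(ENABLE_IPV6) && !defined(USE_IPV6)
#define USE_IPV6
#endif

#ifdef USE_IPV6
  /* IPv6-only code paths */
#endif
```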
Closes#13349
Reduced the size of the dynamically_allocated_data structure. Reduced the
number of values stored in enum dupstring and enum dupblob, which shrinks
the corresponding arrays placed in the UserDefined structure.
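Roughly, the pattern involved (a simplified sketch with illustrative names;
the real layout has many more entries):
```c
/* Sketch: each enum value indexes a slot in a fixed-size array kept in
   UserDefined, so removing enum values directly shrinks the struct. */
enum dupstring {
  STR_EXAMPLE_A,        /* illustrative entries */
  STR_EXAMPLE_B,
  STR_LAST              /* array size marker */
};

struct UserDefined_sketch {
  char *str[STR_LAST];  /* dynamically allocated strings */
  /* blobs indexed by enum dupblob work the same way */
};
```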
Closes#13188
The two options CURLOPT_PROXYUSERNAME and CURLOPT_PROXYPASSWORD set the
actual names as-is, not URL encoded.
Modified test 503 to use percent-encoded strings in the credential
strings that should be passed on as-is.
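A hedged usage example of the as-is behavior (the credential values are made
up):
```c
#include <curl/curl.h>

/* Illustrative only: the percent sequences below reach the proxy exactly as
   given; libcurl does not URL-decode these two options. */
static CURLcode set_proxy_creds(CURL *h)
{
  curl_easy_setopt(h, CURLOPT_PROXYUSERNAME, "user%20name");
  return curl_easy_setopt(h, CURLOPT_PROXYPASSWORD, "secret%2Fpass");
}
```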
Reported-by: Sergey Ogryzkov
Fixes#13265
Closes#13270
Move all handling of HTTP's `Expect: 100-continue` feature into a client
reader. Add sending flag `KEEP_SEND_TIMED` that triggers transfer
sending on general events like a timer.
HTTP installs a `CURL_CR_PROTOCOL` reader when announcing `Expect:
100-continue`. That reader works as follows:
- on first invocation, records time, starts the `EXPIRE_100_TIMEOUT`
timer, disables `KEEP_SEND`, enables `KEEP_SEND_TIMED` and returns 0,
eos=FALSE like a paused upload.
- on subsequent invocation it checks if the timer has expired. If so, it
enables `KEEP_SEND` and switches to passing through reads to the
underlying readers.
Transfer handling's `readwrite()` will be invoked when a timer expires
(like `EXPIRE_100_TIMEOUT`) or when data from the server arrives. Seeing
`KEEP_SEND_TIMED`, it will try to upload more data, which triggers
reading from the client readers again. This may then lead to a new
pause or cause the upload to start.
Flags and timestamps connected to this have been moved from
`SingleRequest` into the reader's context.
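A condensed sketch of that reader's behaviour; the context struct and the
`*_sketch()` helpers are hypothetical and the real code differs:
```c
static CURLcode cr_exp100_read_sketch(struct Curl_easy *data,
                                      struct exp100_ctx_sketch *ctx,
                                      char *buf, size_t blen,
                                      size_t *pnread, bool *peos)
{
  if(ctx->state == EXP100_FIRST) {
    /* first call: behave like a paused upload and wait for a 100 response */
    record_start_time_sketch(ctx);
    start_timer_sketch(data, EXPIRE_100_TIMEOUT);
    data->req.keepon &= ~KEEP_SEND;
    data->req.keepon |= KEEP_SEND_TIMED;
    ctx->state = EXP100_WAITING;
    *pnread = 0;
    *peos = FALSE;
    return CURLE_OK;
  }
  if(ctx->state == EXP100_WAITING && timer_expired_sketch(ctx)) {
    /* the wait is over (timeout or 100 response): resume sending */
    data->req.keepon |= KEEP_SEND;
    ctx->state = EXP100_SENDING;
  }
  if(ctx->state == EXP100_SENDING)
    /* from now on, pass reads through to the underlying client readers */
    return read_from_next_sketch(ctx, buf, blen, pnread, peos);
  *pnread = 0;
  *peos = FALSE;
  return CURLE_OK;
}
```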
Closes#13110
A transfer may perform several `SingleRequest`s before it succeeds. This
happens regularly for authentication, redirect follows and retries on
failed connections.
The "readwrite()" calls and functions connected to those carried a `bool
*done` parameter to indicate that the current `SingleRequest` is over.
This may happen before `upload_done` or `download_done` bits of
`SingleRequest` are set.
The problem with that is that `write_resp()` protocol handlers are now
invoked in places where the `bool *done` cannot be passed up to the
caller. Instead of being a bool in the call chain, it needs to become a
member of `SingleRequest`, reflecting its state.
This removes the `bool *done` parameter and adds the `done` bit to
`SingleRequest` instead. It adds `Curl_req_soft_reset()` for using a
`SingleRequest` in a follow up, clearing `done` and other
flags/counters.
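A rough sketch of what such a soft reset does, with illustrative member
names; the real struct and function handle more state:
```c
static void req_soft_reset_sketch(struct SingleRequest *req)
{
  req->done = FALSE;          /* the new member replacing the bool *done chain */
  req->upload_done = FALSE;
  req->download_done = FALSE;
  req->bytecount = 0;         /* illustrative counters */
  req->headerbytecount = 0;
}
```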
Closes#13096
new struct ip_quadruple for holding local/remote addr+port
- used in data->info and conn and cf-socket.c
- copy back and forth complete struct
- add 'secondary' to conn
- use secondary in reporting success for ftp 2nd connection
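A sketch of what such a quadruple looks like (member names and sizes are
illustrative; see the curl sources for the real definition):
```c
/* Sketch: one struct carrying both endpoints, copied as a unit between
   cf-socket.c, the connection and data->info. */
struct ip_quadruple_sketch {
  char remote_ip[46];   /* large enough for an IPv6 literal */
  int  remote_port;
  char local_ip[46];
  int  local_port;
};
```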
Reported-by: DasKutti on github
Fixes#13084
Closes#13090
- seek_func/seek_client, use transfer values only
- remove copies held in `struct connectdata`, only ever use
`data->set.seek_func`
- resolves possible issues in multiuse connections
- the new mime post reader eliminates the need to ever overwrite this
- websockets, remove empty Curl_ws_done() function
Closes#13079
- Move all the "upload_done" handling to request.c
- add possibility to abort sending of a request
- add `Curl_req_done_sending()` for checks
- transfer.c: readwrite_upload() now clean
- removing data->state.ulbuf and data->req.upload_fromhere
- as well as data->req.upload_present
- set data->req.upload_done on having read all from
the client and completely flushed the send buffer
- tftp, remove setting of data->req.upload_fromhere
- serves no purpose as `upload_present` is not set
and the data itself is sent directly via `sendto()` anyway
- smtp, make upload EOB conversion a client reader
- xfer_ulbuf addition
- add xfer_ulbuf for borrowing, similar to xfer_buf
- use in file upload
- use in c-hyper body sending
- h1-proxy, remove init of data->state.ulbuf that is never used
- smb, add own send_buf instead of using data->state.ulbuf
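A sketch of the borrow/use/release cycle for the new upload buffer; the
`*_sketch()` helpers are hypothetical and `Curl_xfer_send()` is shown with a
simplified signature:
```c
static CURLcode upload_from_client_sketch(struct Curl_easy *data)
{
  char *ulbuf;
  size_t buflen, nread, nsent;
  CURLcode result = xfer_ulbuf_borrow_sketch(data, &ulbuf, &buflen);
  if(result)
    return result;
  result = client_read_sketch(data, ulbuf, buflen, &nread);
  if(!result && nread)
    result = Curl_xfer_send(data, ulbuf, nread, &nsent);
  xfer_ulbuf_release_sketch(data, ulbuf); /* release before borrowing again */
  return result;
}
```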
Closes#13010
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk
Add a method for sending not just raw bytes, but bytes that are either
"headers" or "body". The send abstraction stack, top to bottom, now is:
* `Curl_req_send()`: has parameter to indicate amount of header bytes,
buffers all data.
* `Curl_xfer_send()`: knows on which socket index to send, returns
amount of bytes sent.
* `Curl_conn_send()`: called with socket index, returns amount of bytes
sent.
In addition there is `Curl_req_flush()` for writing out all buffered
bytes.
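A compact sketch of that delegation, using hypothetical stand-in names; the
real `Curl_req_send()`, `Curl_xfer_send()` and `Curl_conn_send()` take more
parameters:
```c
static CURLcode conn_send_sketch(struct Curl_easy *data, int sockindex,
                                 const void *buf, size_t blen, size_t *pnsent)
{
  /* bottom: send on the filter chain of the given socket index */
  (void)data; (void)sockindex; (void)buf;
  *pnsent = blen;               /* pretend everything was sent */
  return CURLE_OK;
}

static CURLcode xfer_send_sketch(struct Curl_easy *data,
                                 const void *buf, size_t blen, size_t *pnsent)
{
  /* middle: pick the socket index from the transfer setup, then hand down */
  int sockindex =
    (data->conn->writesockfd == data->conn->sock[SECONDARYSOCKET]) ?
    SECONDARYSOCKET : FIRSTSOCKET;
  return conn_send_sketch(data, sockindex, buf, blen, pnsent);
}

static CURLcode req_send_sketch(struct Curl_easy *data, const char *buf,
                                size_t blen, size_t hds_len, size_t *pnsent)
{
  /* top: 'hds_len' leading bytes are header bytes; the real function buffers
     everything in data->req.sendbuf before it gets flushed */
  (void)hds_len;
  return xfer_send_sketch(data, buf, blen, pnsent);
}
```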
`Curl_req_send()` is active for requests without a body, while
`Curl_buffer_send()` is still used for others. This is because some
special quirks need to be addressed in future parts:
* `expect-100` handling
* `Curl_fillreadbuffer()` needs to add directly to the new
`data->req.sendbuf`
* special body handlings, like `chunked` encodings and line end
conversions will be moved into something like a Client Reader.
In functions of the pattern `CURLcode xxx_send(..., ssize_t *written)`,
replace the `ssize_t` with a `size_t`. It makes no sense to allow for negative
values as the returned `CURLcode` already specifies error conditions. This
allows easier handling of lengths without casting.
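For a function following that pattern the change looks roughly like this
(the parameter list is illustrative):
```c
/* before: a signed length, although a negative value is never returned */
CURLcode xxx_send(struct Curl_easy *data, const void *buf, size_t blen,
                  ssize_t *written);

/* after: lengths stay unsigned, errors are reported via the CURLcode */
CURLcode xxx_send(struct Curl_easy *data, const void *buf, size_t blen,
                  size_t *written);
```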
Closes#12964
Curl_read/Curl_write clarifications
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to clarify
when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer setup of
`conn->sockfd` and `conn->writesockfd` on which connection filter
chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index as
parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for naming
consistency
- clarify that the special CURLE_AGAIN handling to return `CURLE_OK`
with length 0 only applies to `Curl_xfer_send()` and CURLE_AGAIN is
returned by all other send() variants.
SingleRequest reshuffling
- move functions into request.[ch]
- differentiate between reset and free
- add Curl_req_done() to perform last actions
- add a send `bufq` to SingleRequest for future use in keeping upload data
Closes#12963
- add a client writer that does the "push" of response
headers written to the client when the headers api
is enabled
- remove special handling in sendf.c
- needs to be installed very early on connection
setup to catch CONNECT response headers
Closes#12880
Remove curl_mimepart object from UserDefined structure when
CURL_DISABLE_MIME flag is active. Reduce size of UserDefined structure.
Also remove unreachable code: when CURL_DISABLE_MIME is set, httpreq can
never have HTTPREQ_POST_MIME value and the same goes for the
CURL_DISABLE_FORM_API flag and the HTTPREQ_POST_FORM value.
Closes#12948
- can be borrowed by transfer during recv-write operation
- needs to be released before borrowing again
- adjusts its size to `data->set.buffer_size`
- used in transfer.c readwrite_data()
Closes#12805
This clarifies the handling of server responses by folding the code for
the complicated protocols into their protocol handlers. This concerns
mainly HTTP and its bastard sibling RTSP.
The terms "read" and "write" are often used without clear context if
they refer to the connect or the client/application side of a
transfer. This PR uses "read/write" for operations on the client side
and "send/receive" for the connection, i.e. the server side. If this is
considered useful, we can revisit renaming of further methods in another
PR.
Curl's protocol handler `readwrite()` method has been changed:
```diff
- CURLcode (*readwrite)(struct Curl_easy *data, struct connectdata *conn,
- const char *buf, size_t blen,
- size_t *pconsumed, bool *readmore);
+ CURLcode (*write_resp)(struct Curl_easy *data, const char *buf, size_t blen,
+ bool is_eos, bool *done);
```
The name was changed to clarify that this writes response data to the
client side. The parameter changes are:
* `conn` removed as it always operates on `data->conn`
* `pconsumed` removed as the method needs to handle all data on success
* `readmore` removed as no longer necessary
* `is_eos` as indicator that this is the last call for the transfer
response (end-of-stream).
* `done` TRUE on return iff the transfer response is to be treated as
finished
This change affects many files only because of updated comments in
handlers that provide no implementation. The real change is that the
HTTP protocol handlers now provide an implementation.
The HTTP protocol handlers `write_resp()` implementation will get passed
**all** raw data of a server response for the transfer. The HTTP/1.x
formatted status and headers, as well as the undecoded response
body. `Curl_http_write_resp_hds()` is used internally to parse the
response headers and pass them on. This method is public as the RTSP
protocol handler also uses it.
HTTP/1.1 "chunked" transport encoding is now part of the general
*content encoding* writer stack, just like other encodings. A new flag
`CLIENTWRITE_EOS` was added for the last client write. This allows
writers to verify that they are in a valid end state. The chunked
decoder will check if it indeed has seen the last chunk.
The general response handling in `transfer.c:466` happens in function
`readwrite_data()`. This mainly operates now like:
```
static CURLcode readwrite_data(data, ...)
{
do {
Curl_xfer_recv_resp(data, buf)
...
Curl_xfer_write_resp(data, buf)
...
} while(interested);
...
}
```
All the response data handling is implemented in
`Curl_xfer_write_resp()`. It calls the protocol handler's `write_resp()`
implementation if available, or does the default behaviour.
All raw response data needs to pass through this function. Which also
means that anyone in possession of such data may call
`Curl_xfer_write_resp()`.
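In other words, roughly (a sketch, not the literal implementation; assumes
the usual curl internal types):
```c
/* handlers that provide write_resp() get all raw response bytes,
   others get the default client writer handling */
static CURLcode xfer_write_resp_sketch(struct Curl_easy *data,
                                       const char *buf, size_t blen,
                                       bool is_eos, bool *done)
{
  if(data->conn->handler->write_resp)
    return data->conn->handler->write_resp(data, buf, blen, is_eos, done);
  /* default: hand the bytes to the installed client writers as body data */
  *done = FALSE;
  return Curl_client_write(data,
                           CLIENTWRITE_BODY | (is_eos ? CLIENTWRITE_EOS : 0),
                           buf, blen);
}
```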
Closes#12480
To help users better understand where the URL (and denied scheme) comes
from. Also removed "in libcurl" from the message, since the disabling
can be done by the application.
The error message now says "not supported" or "disabled" depending on
why it was denied:
Protocol "hej" not supported
Protocol "http" disabled
And in redirects:
Protocol "hej" not supported (in redirect)
Protocol "http" disabled (in redirect)
Reported-by: Mauricio Scheffer
Fixes#12465Closes#12469
- add `SingleRequest->download_done` as indicator that
all download bytes have been received
- remove `stop_reading` bool from readwrite functions
- move excess body handling into client download writer
Closes#12371
Windows compilers define `_WIN32` automatically. Windows SDK headers
or build env defines `WIN32`, or we have to take care of it. The
agreement seems to be that `_WIN32` is the preferred practice here.
Make the source code rely on that to detect we're building for Windows.
Public `curl.h` was using `WIN32`, `__WIN32__` and `CURL_WIN32` for
Windows detection, next to the official `_WIN32`. After this patch it
only uses `_WIN32` for this. Also, make it stop defining `CURL_WIN32`.
There is a slight chance these break compatibility with Windows
compilers that fail to define `_WIN32`. I'm not aware of any obsolete
or modern compiler affected, but in case there is one, one possible
solution is to define this macro manually.
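For example, such a build could add something like this before including the
curl headers (a workaround sketch, not something curl itself ships):
```c
/* If a (hypothetical) compiler targets Windows but does not predefine
   _WIN32, define it manually, e.g. in a prefix header or on the compiler
   command line. */
#ifndef _WIN32
#define _WIN32
#endif
```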
grepping for `WIN32` remains useful to discover Windows-specific code.
Also:
- extend `checksrc` to ensure we're not using `WIN32` anymore.
- apply minor formatting here and there.
- delete unnecessary checks for `!MSDOS` when `_WIN32` is present.
Co-authored-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Closes#12376
- have common pattern of `if not match, continue`
- rework pages-long if()s to return early
- move dead connection check to later since it may
be relatively expensive
- check multiuse also when NOT building with NGHTTP2
- for MULTIUSE bundles, verify that the inspected
connection indeed supports multiplexing when in use
(bundles may contain a mix of connections, afaict)
Closes#12373
Instead of a loop to scan over the potentially 30+ scheme names, this
uses a "perfect hash" table. This works fine because the set of schemes
is known and cannot change in a build. The hash algorithm and table size
are chosen so that each table entry holds only a single scheme index.
The perfect hash is generated by a separate tool (scripts/schemetable.c).
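A minimal sketch of what such a lookup amounts to; the hash constants and
table size here are made up, not the generated ones:
```c
#include <stddef.h>
#include <string.h>

struct scheme_entry_sketch { const char *name; /* ... handler data ... */ };

static const struct scheme_entry_sketch *schemetable_sketch[64];

static const struct scheme_entry_sketch *lookup_scheme_sketch(const char *s)
{
  unsigned int h = 5381;
  const char *p;
  for(p = s; *p; p++)
    h = (h * 33) ^ (unsigned char)(*p | 0x20); /* crude lowercase fold */
  h &= 63;  /* table size 64, chosen so every known scheme gets its own slot */
  /* perfect hashing: at most one candidate, so a single string compare */
  if(schemetable_sketch[h] && !strcmp(schemetable_sketch[h]->name, s))
    return schemetable_sketch[h];
  return NULL;
}
```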
Closes#12347
Fixes:
```
./lib/url.c:178:56: warning: use of an empty initializer is a C2x extension [-Wc2x-extensions]
178 | static const struct Curl_handler * const protocols[] = {
| ^
./lib/url.c:178:56: warning: zero size arrays are an extension [-Wzero-length-array]
```
Closes#12344
Fixes:
```
./lib/url.c:456:35: error: no member named 'formp' in 'struct UrlState'
456 | Curl_mime_cleanpart(data->state.formp);
| ~~~~~~~~~~~ ^
```
Regression from 74b87a8af1 #11682
Closes#12343
1. Because the value is not strictly set with a setopt option.
2. Because otherwise, when duping a handle, all the set.* fields are
first copied and if an error happens (think out of memory mid-function),
the function could easily free the list *before* it was deep-copied,
which could lead to a double-free.
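A hypothetical sketch of the hazard in point 2; the struct and helpers do not
exist in curl and are only illustrative:
```c
#include <stdlib.h>

struct easy_sketch { struct settings_sketch { void *somelist; } set; };

static int deep_copy_set_sketch(struct easy_sketch *dst);  /* hypothetical */
static void free_handle_sketch(struct easy_sketch *h);     /* hypothetical */

static struct easy_sketch *duphandle_sketch(const struct easy_sketch *src)
{
  struct easy_sketch *dst = calloc(1, sizeof(*dst));
  if(!dst)
    return NULL;
  dst->set = src->set;             /* plain copy: 'somelist' is now shared */
  if(deep_copy_set_sketch(dst)) {  /* may fail halfway, e.g. out of memory */
    free_handle_sketch(dst);       /* freeing the shared list here would also
                                      pull it away from 'src': double-free */
    return NULL;
  }
  return dst;
}
```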
Closes#12323
To make it work properly with curl_easy_duphandle(). This, because
duphandle duplicates the entire 'UserDefined' struct by plain copy while
'hstslist' is a linked curl_list of file names. This would lead to a
double-free when the second of the two involved easy handles was
closed.
Closes#12315
- tunnel https proxy used for http: transfers did not
check if the proxy-ssl configuration matches
- test cases added, test_10_12 fails on 8.4.0
Closes#12255
- perform connection cache matching against `data->set.ssl.primary`
and proxy counterpart
- fully clone connection ssl config only when connection is used
Closes#12237
- resolving is done for a connection, not for every transfer
- save create/dup/free of a cares channel for each transfer
- check values of setopt calls against a local channel if no
connection has been attached yet, when needed.
Closes#12198
- move definitions from content_encoding.h to sendf.h
- move create/cleanup/add code into sendf.c
- installed content_encoding writers will always be called
on Curl_client_write(CLIENTWRITE_BODY)
- Curl_client_cleanup() frees writers and tempbuffers from
paused transfers, regardless of protocol
Closes#11908
- Fix netrc info message to use the generic ".netrc" filename if the
user did not specify a netrc location.
- Update --netrc doc to add that recent versions of curl on Windows
prefer .netrc over _netrc.
Before:
* Couldn't find host google.com in the (nil) file; using defaults
After:
* Couldn't find host google.com in the .netrc file; using defaults
Closes https://github.com/curl/curl/pull/11904