Manpages which document deprecated CURLOPT_ or CURLINFO_ are not
required to have an EXAMPLE section since they might effectively
be dead no-ops which we don't want to trick users into believing
they can use by copying example code.
Closes: #13540
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
This avoids the below compiler warning:
tftpd.c:280:1: warning: function 'timer' could be declared with
attribute 'noreturn' [-Wmissing-noreturn]
Closes: #13534
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
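A hedged illustration of the general technique, not the actual tftpd.c change: a function that never returns can be declared with a noreturn attribute to satisfy -Wmissing-noreturn. The `NORETURN` macro and function name below are made up for this example.
```c
#include <stdlib.h>

#if defined(__GNUC__) || defined(__clang__)
#define NORETURN __attribute__((noreturn))
#else
#define NORETURN
#endif

static NORETURN void timer_expired(void)
{
  /* clean up, log, then terminate; control never returns to the caller */
  exit(1);
}

int main(void)
{
  timer_expired();
}
```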
Take advantage of the Curl_cipher_suite_walk_str() and
Curl_cipher_suite_get_str() functions introduced in commit fba9afeb.
This also fixes CURLOPT_SSL_CIPHER_LIST not working at all for bearssl
due to commit ff74cef5.
Closes#13464
- connect to DNS names with trailing dot
- connect to DNS names with double trailing dot
- rustls, always give `peer->hostname` and let it
figure out SNI itself
- add SNI tests for ip address and localhost
- document in code and TODO that QUIC with ngtcp2+wolfssl
does not do proper peer verification of the certificate
- mbedtls, skip tests with ip address verification as not
supported by the library
Closes#13486
- quiche: error transfers that try to receive on a closed
or draining connection
- ngtcp2: use callback for extending max bidi streams. This
allows more precise calculation of MAX_CONCURRENT as we
can only start a new stream when the server acknowledges
the close - not when we locally have closed it.
- remove an fprintf() from the h2-download client to avoid excess
log files on tests timing out.
Closes#13475
- add session with destructor callback
- remove vtls `session_free` method
- let `Curl_ssl_addsessionid()` take ownership
of session object, freeing it also on failures
- change tls backend use
- test_17, add tests for SSL session resumption
Closes#13386
- ignore duplicate "chunked" transfer-encodings from
a server to accommodate broken implementations
- add test1482 and test1483
Reported-by: Mel Zuser
Fixes#13451
Closes#13461
Use a lookup list to set the cipher suites, allowing the
ciphers to be set by either openssl or IANA names.
To keep the binary size of the lookup list down we compress
each entry in the cipher list down to 2 + 6 bytes using the
C preprocessor.
Closes#13442
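A hypothetical sketch of the compression idea described above: each table entry holds the 16-bit IANA cipher suite number (2 bytes) plus a short encoded name (6 bytes), for a fixed 8-byte entry. The struct layout, macro, ids and abbreviations below are invented for illustration, not bearssl.c's actual table.
```c
#include <stdint.h>

struct cipher_entry {
  uint16_t id;    /* IANA cipher suite number: 2 bytes */
  char abbr[6];   /* compressed name fragment: 6 bytes */
};

#define CIPHER(id, abbr) { (id), (abbr) }

static const struct cipher_entry ciphers[] = {
  CIPHER(0x1301, "A128G"),  /* e.g. TLS_AES_128_GCM_SHA256 */
  CIPHER(0x1302, "A256G"),  /* e.g. TLS_AES_256_GCM_SHA384 */
};
```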
This fixes a regression of 75d79a4486. The
code in tool-operate truncated the etag save file, under the assumption
that the file would be written with a new etag value. However since
75d79a4486 that might not be the case
anymore, and could result in the file being truncated when --etag-compare
and --etag-save were used and the etag value matched what the server
responded with. Instead, the truncation is now only done when a new etag
value is to be written.
Test 3204 was added to verify that the file with the etag value does not
change its contents when used by --etag-compare and --etag-save and the
value matches what the server returns on a non-2xx response.
Closes#13432
- errors returned by Curl_xfer_write_resp() and the header variant are
not errors in the protocol. The result needs to be returned on the
next recv() from the protocol filter.
- make xfer write errors for response data cause the stream to be
cancelled
- added pytest test_02_14 and test_02_15 to verify that also for
parallel processing
Reported-by: Laramie Leavitt
Fixes#13411
Closes#13424
A connection that has seen an HTTP major version now refuses any other
major HTTP version in future responses. Previously, an HTTP/1.x
connection would just silently accept HTTP/2 or HTTP/3 in the status
lines as long as it had support for those built-in. It would then just
lead to confusion and badness.
Indirectly spotted by CodeSonar, which identified a duplicate assignment
in this function.
Add test 471 to verify
Closes#13421
By default the API does not return empty queries and fragments when
extracting parts, unless this new flag is set.
This also makes the behavior more consistent: without the flag set,
zero-length queries and fragments are considered not present in the URL.
With the flag set, they are returned as zero-length strings if they were
in fact present in the URL.
This applies when extracting the individual query and fragment
components and for the full URL.
Closes#13396
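A hedged usage sketch; `CURLU_GET_EMPTY` is assumed to be the name of the new flag, since the text above does not spell it out.
```c
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURLU *u = curl_url();
  char *q = NULL;
  curl_url_set(u, CURLUPART_URL, "https://example.com/path?#frag", 0);
  /* without the flag, an empty query is reported as "not present" */
  if(!curl_url_get(u, CURLUPART_QUERY, &q, CURLU_GET_EMPTY))
    printf("query: \"%s\"\n", q);   /* prints a zero-length string */
  curl_free(q);
  curl_url_cleanup(u);
  return 0;
}
```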
When using the URL API for a redirect URL where the redirected-to string
starts with a hash, i.e. is only a fragment, the API would produce the
wrong final URL.
Adjusted test 1560 to test for several new redirect cases.
Closes#13394
- add `Curl_hash_offt` as hashmap between a `curl_off_t` and
an object. Use this in h2+h3 connection filters to associate
`data->id` with the internal stream state.
- changed implementations of all affected connection filters
- removed `h2_ctx*` and `h3_ctx*` from `struct HTTP` and thus
the easy handle
- solves the problem of attaching "foreign protocol" easy handles
during connection shutdown
Test 1616 verifies the new hash functions.
Closes#13204
I implemented the IDN functions for macOS and iOS using the Unicode
libraries that ship with macOS and iOS.
Builds and runs here on macOS 14.2.1. Also verified to load and
run on older macOS version 10.13.
Build requires macOS SDK 13 or equivalent.
Set `-DUSE_APPLE_IDN=ON` CMake option to enable it.
With autotools and other build tools, set these manual options:
```
CPPFLAGS=-DUSE_APPLE_IDN
LIBS=-licucore
```
Completes TODO 1.6.
TODO: add autotools option and feature-detection.
Refs: #5330 #5371
Co-authored-by: Viktor Szakats
Closes#13246
- fix flow handling in ngtcp2 to ACK data on streams
we abort ourselves.
- extend test_02_23* cases to also run for h3
- skip test_02_23* for OpenSSL QUIC as it gets stalled
on progressing the connection
Closes#13374
To reduce the risk that the user running the tests has a .curlrc present
that messes things up.
Support 'option="no-q"' for the <command> tag to switch it off on demand.
Use this new feature in test 433 and 436.
Ref: #13284
Closes#13387
- remember error encountered in invoking write callback and always fail
afterwards without further invokes
- check behaviour in test_02_17 with h2-pausing client
Reported-by: Pavel Kropachev
Fixes#13337
Closes#13340
Before this patch, two macros were used to guard IPv6 features in curl
sources: `ENABLE_IPV6` and `USE_IPV6`. This patch makes the source use
the latter for consistency with other similar switches.
`-DENABLE_IPV6` remains accepted for compatibility as a synonym for
`-DUSE_IPV6`, when passed to the compiler.
`ENABLE_IPV6` also remains the name of the CMake and `Makefile.vc`
options to control this feature.
Closes#13349
- h2-download now always opens the output file on the first write callback
invocation, whether it will pause the transfer or not.
- Checks on output files then do not depend on the amount of data curl
has collected for the first write.
Closes#13323
- When the writing of response data fails, reset the stream
and do not return a callback error to nghttp2. That would
be a fatal error for the connection and harm other requests.
- add test cases for various abort scenarios
Reported-by: Konstantin Kuzov
Fixes#13292
Closes#13298
... no need to use an absolute path, that makes the build unnecessarily
fail if invoked using a different mount point. managen now takes options
to find the input files.
Update test1478 to provide the dir arguments to managen
Closes#13281
A transfer with a completed download that is still uploading needs to
check the connection state when it is PAUSEd, since connection
close/errors would otherwise go unnoticed.
Reported-by: Sergey Bronnikov
Fixes#13260
Closes#13271
The two options CURLOPT_PROXYUSERNAME and CURLOPT_PROXYPASSWORD set the
actual names as-is, not URL encoded.
Modified test 503 to use percent-encoded strings in the credential
strings that should be passed on as-is.
Reported-by: Sergey Ogryzkov
Fixes#13265
Closes#13270
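A minimal sketch of the behavior; the proxy URL is a made-up placeholder. The point is that the credential strings are passed on verbatim, with no URL decoding of the percent sequences.
```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:3128");
    curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, "user%41name"); /* stays "%41" */
    curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, "p%40ss");      /* stays "%40" */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```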
- curl's transfer handling may write 0-length chunks at the end of the
download with an EOS flag. (HTTP/2 does this commonly)
- content encoders need to pass-through such a write and not count this
as error in case they are finished decoding
Fixes#13209
Fixes#13212
Closes#13219
Move all handling of HTTP's `Expect: 100-continue` feature into a client
reader. Add sending flag `KEEP_SEND_TIMED` that triggers transfer
sending on general events like a timer.
HTTP installs a `CURL_CR_PROTOCOL` reader when announcing `Expect:
100-continue`. That reader works as follows:
- on first invocation, records time, starts the `EXPIRE_100_TIMEOUT`
timer, disables `KEEP_SEND`, enables `KEEP_SEND_TIMED` and returns 0,
eos=FALSE like a paused upload.
- on subsequent invocation it checks if the timer has expired. If so, it
enables `KEEP_SEND` and switches to passing through reads to the
underlying readers.
Transfer handling's `readwrite()` will be invoked when a timer expires
(like `EXPIRE_100_TIMEOUT`) or when data from the server arrives. Seeing
`KEEP_SEND_TIMED`, it will try to upload more data, which triggers
reading from the client readers again. Which then may lead to a new
pausing or cause the upload to start.
Flags and timestamps connected to this have been moved from
`SingleRequest` into the reader's context.
Closes#13110
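A hedged sketch of the reader behaviour described above. All names here (`cr_exp100_ctx`, `expect100_read`, the reader signature) are made up for illustration and do not match curl's internal client reader API.
```c
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

struct cr_exp100_ctx {
  bool started;        /* first invocation seen? */
  time_t started_at;   /* when the 100-continue wait began */
  int timeout_secs;    /* stand-in for EXPIRE_100_TIMEOUT */
};

typedef size_t underlying_read(char *buf, size_t blen, bool *eos);

/* returning 0 with *eos == false acts like a paused upload */
static size_t expect100_read(struct cr_exp100_ctx *ctx, char *buf, size_t blen,
                             bool *eos, underlying_read *next)
{
  if(!ctx->started) {
    /* first call: record the time, keep the upload "paused" while waiting
       for a 100-continue response or for the timer to expire */
    ctx->started = true;
    ctx->started_at = time(NULL);
    *eos = false;
    return 0;
  }
  if((time(NULL) - ctx->started_at) < ctx->timeout_secs) {
    *eos = false;        /* still waiting */
    return 0;
  }
  /* timer expired: pass reads through to the underlying reader from now on */
  return next(buf, blen, eos);
}
```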
The option is really two enums ORed together, so it needs special
attention to make the code output nice.
Added test 1481 to verify. Both the server and the proxy versions.
Reported-by: Boris Verkhovskiy
Fixes#13127
Closes#13129
Use imperative mood consistently for the first sentence describing an
option.
"Set this" instead "tell curl to set" or "this sets..."
Plus some extra cleanups and rephrasing.
Closes#13106
... correctly, even when they follow an existing one without a space in
between.
Verify with test 467
Follow-up to 07dd60c05b
Reported-by: Geeknik Labs
Fixes#13101
Closes#13102
... in its new build path.
Also update the test scripts to be more precise in error messages to
help us understand CI errors better.
Follow-up to f03c85635f
Ref: #13029
Closes#13083
Create ASCII version of manpage without nroff
- build src/tool_hugehelp.c from the ascii manpage
- move the manpage and the ascii version build to docs/cmdline-opts
- remove all use of nroff from the build process
- should make the build entirely reproducible (by avoiding nroff)
- partly reverts 2620aa9 to build libcurl option man pages one by one
in cmake because the appveyor builds got all crazy until I did
The ASCII version of the manpage
- is built with gen.pl, just like the manpage is
- has a right-justified column making the appearance similar to the previous
version
- uses a 4-space indent per level (instead of the old version's 7)
- does not do hyphenation of words (which nroff does)
History
We first made the curl build use nroff for building the hugehelp file in
December 1998, for curl 5.2.
Closes#13047
If a response without a status line is received, and the connection is
known to use HTTP/1.x (not HTTP/0.9), report the error "Invalid status
line" instead of "Received HTTP/0.9 when not allowed".
Closes#13045
- pytest has changed the signature of the hook pytest_report_header()
for some obscure reason and that change landed in our CI now
- remove the changed param that we never used anyway
Closes#13037
This fixes miscellaneous typos and duplicated words in the docs, lib
and test comments and a few user-facing error strings.
Author: RainRat on Github
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Dan Fandrich <dan@coneharvesters.com>
Closes: #13019
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
Added as documented [in
CLIENT-READER.md](5b1f31dfba/docs/CLIENT-READERS.md).
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending requests headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves#12938 as now small and large requests have the
same code path.
Changes done to test cases:
- test513: now fails before sending request headers as this initial
"client read" triggers the setup fault. Behaves now the same as in
hyper build
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes#12969
When disabling all protocols without enabling any, the resulting
set of allowed protocols remained the default set. Clearing the
allowed set before inspecting the passed value from --proto makes
the set empty even in the error path when no protocols are enabled.
Co-authored-by: Dan Fandrich <dan@telarity.com>
Reported-by: Dan Fandrich <dan@telarity.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Closes: #13004
Curl_read/Curl_write clarifications
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to clarify
when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer setup of
`conn->sockfd` and `conn->writesockfd` on which connection filter
chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index as
parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for naming
consistency
- clarify that the special CURLE_AGAIN handling to return `CURLE_OK`
with length 0 only applies to `Curl_xfer_send()` and CURLE_AGAIN is
returned by all other send() variants.
SingleRequest reshuffling
- move functions into request.[ch]
- differentiate between reset and free
- add Curl_req_done() to perform last actions
- add a send `bufq` to SingleRequest for future use in keeping upload data
Closes#12963
Returns 1 if the previous transfer used a proxy, otherwise 0. Useful to,
for example, determine if a `NOPROXY` pattern matched the hostname or
not.
Extended test 970 and 972
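A hedged usage sketch, assuming the new info is exposed as `CURLINFO_USED_PROXY` and returned as a long (the text above does not name it).
```c
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    long used = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_NOPROXY, "example.com");
    if(!curl_easy_perform(curl) &&
       !curl_easy_getinfo(curl, CURLINFO_USED_PROXY, &used))
      /* 0 here tells us the NOPROXY pattern matched and no proxy was used */
      printf("proxy used: %s\n", used ? "yes" : "no");
    curl_easy_cleanup(curl);
  }
  return 0;
}
```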
- when data arrived in several chunks, the collection into
the passed buffer always started at offset 0, overwriting
the data already there.
- added test_20_07 to verify the fix
- debug environment var CURL_WS_CHUNK_SIZE can be used to
influence the buffer chunk size used for en-/decoding.
Closes#12945
Also fix the tests. New implementation tested with GNU libmicrohttpd.
The new numbers in tests are real SHA-512/256 numbers (not just some
random ;) numbers).
The previous realloc code in this code could trigger a compiler warning,
but since that code path cannot happen in normal circumstances it now
instead exits with an error message there.
Ref: #12887
Closes#12890
- add test_05_04 for requests using http/1.0, http/1.1 and h2 against an
Apache resource that does an unclean TLS shutdown.
- revert special workaround in openssl.c for suppressing shutdown errors
on multiplexed connections
- vtls.c restored to its state before 9a90c9dd64
Fixes#12885
Fixes#12844
Closes#12848
- test450: remove --config from the keywords
- test2080: change return code
- test428: add --config as a keyword
- test428: disable on Windows due to CI problems
Like when trying to import an environment variable that does not exist.
Also fix a bug for reading env variables when there is a default value
set.
Bug: https://curl.se/mail/archive-2024-02/0008.html
Reported-by: Brett Buddin
Add test 462 to verify.
Closes#12862
Create the line in a dynbuf. Aborts the reading of the file on
errors. Avoids having to always allocate maximum amount from the
start. Avoids direct malloc.
Closes#12846
- improve info logging when peer verification fails to indicate
if DNS name or ip address has been tried to match
- add test case for contacting https proxy with ip address
- add pytest env check on loaded credentials and re-issue
when they are no longer valid
- disable proxy ip address test for bearssl, since not supported there
Ref: #12831
Closes#12838
and mark the stream for close, but return OK since the response this far
was ok - if headers were received. Partly because this is what curl has
done traditionally.
Test 499 verifies. Updates test 689.
Reported-by: Sergey Bronnikov
Bug: https://curl.se/mail/lib-2024-02/0000.html
Closes#12842
- delete redundant warning suppressions for `-Wformat-nonliteral`.
This now relies on `CURL_PRINTF()` and it's theoretically possible
that this macro isn't active but the warning is. We're ignoring this
as a corner-case here.
- replace two pragmas with code changes to avoid the warnings.
Follow-up to aee4ebe591 #12803
Follow-up to 0923012758 #12540
Follow-up to 3829759bd0 #12489
Reviewed-by: Daniel Stenberg
Closes#12812
- Use http authentication mechanisms as a default, not a preset.
Consider http authentication options which are mapped to SASL options as
a default (overriding the hardcoded default mask for the protocol) that
is ignored if a login option string is given.
Prior to this change, if some HTTP auth options were given, sasl mapped
http authentication options to sasl ones but merged them with the login
options.
That caused problems with the cli tool that sets the http login option
CURLAUTH_BEARER as a side-effect of --oauth2-bearer, because this flag
maps to more than one sasl mechanism and the latter cannot be cleared
individually by the login options string.
New test 992 checks this.
Fixes https://github.com/curl/curl/issues/10259
Closes https://github.com/curl/curl/pull/12790
When checking if the user wants to replace the header, the check should
be case insensitive.
Adding test 461 to verify
Found-by: Dan Fandrich
Ref: #12782
Closes#12784
The pingpong logic now uses its own dynbuf for receiving command
response data.
When the "final" response header for a commanad has been received, that
final line is left first in the recvbuf for the protocols to parse at
will. If there is additional data behind the final response line, the
'overflow' counter indicates how many bytes there are.
Closes#12757
- switch all individual files documenting command line options into .md,
as the documentation is now markdown-looking.
- made the parser treat 4-space indents as quotes
- switch to building the curl.1 manpage using the "mainpage.idx" file,
which lists the files to include to generate it, instead of using the
previous page-footer/headers. Also, those files are now also .md
ones, using the same format. I gave them underscore prefixes to make
them sort separately:
_NAME.md, _SYNOPSIS.md, _DESCRIPTION.md, _URL.md, _GLOBBING.md,
_VARIABLES.md, _OUTPUT.md, _PROTOCOLS.md, _PROGRESS.md, _VERSION.md,
_OPTIONS.md, _FILES.md, _ENVIRONMENT.md, _PROXYPREFIX.md,
_EXITCODES.md, _BUGS.md, _AUTHORS.md, _WWW.md, _SEEALSO.md
- updated test cases accordingly
Closes#12751
curldown is this new file format for libcurl man pages. It is
markdown-inspired, with these differences:
- Each file has a set of leading headers with meta-data
- Supports a small subset of markdown
- Uses .md file extensions for editors/IDE/GitHub to treat them nicely
- Generates man pages very similar to the previous ones
- Generates man pages that still convert nicely to HTML on the website
- Detects and highlights mentions of curl symbols automatically (when
their man page section is specified)
tools:
- cd2nroff: converts from curldown to nroff man page
- nroff2cd: convert an (old) nroff man page to curldown
- cdall: convert many nroff pages to curldown versions
- cd2cd: verifies and updates a curldown to latest curldown
This setup generates .3 versions of all the curldown versions at build time.
CI:
Since the documentation is now technically markdown in the eyes of many
things, the CI runs many more tests and checks on this documentation,
including proselint, link checkers and tests that make sure we capitalize the
first letter after a period...
Closes#12730
- HTTP/3 for curl using OpenSSL's own QUIC stack together
with nghttp3
- configure with `--with-openssl-quic` to enable curl to
build this. This requires the nghttp3 library
- implementation with the following restrictions:
* macOS has to use an unconnected UDP socket due to an
issue in OpenSSL's datagram implementation
See https://github.com/openssl/openssl/issues/23251
This makes connections to non-responsive servers hang.
* GET requests will send the indicator that they have
no body in a separate QUIC packet. This may result
in processing delays or Transfer-Encodings on proxied
requests
* uploads that encounter blocks will use 100% cpu as
detection of this flow control issue is not working
(we have not figured out how to pry that from OpenSSL).
Closes#12734
- in en- and decoding, check the websocket frame payload lengths for
negative values (from curl_off_t) and error the operation in that case
- add test 2307 to verify
Closes#12707
- enforce a response body length of 0, if the
response has no Content-Length. This is according
to the RTSP spec.
- excess bytes in a response body are forwarded to
the client writers which will report and fail the
transfer
Follow-up to d7b6ce6
Fixes#12701
Closes#12706
This clarifies the handling of server responses by folding the code for
the complicated protocols into their protocol handlers. This concerns
mainly HTTP and its bastard sibling RTSP.
The terms "read" and "write" are often used without clear context if
they refer to the connect or the client/application side of a
transfer. This PR uses "read/write" for operations on the client side
and "send/receive" for the connection, e.g. server side. If this is
considered useful, we can revisit renaming of further methods in another
PR.
Curl's protocol handler `readwrite()` method has been changed:
```diff
- CURLcode (*readwrite)(struct Curl_easy *data, struct connectdata *conn,
- const char *buf, size_t blen,
- size_t *pconsumed, bool *readmore);
+ CURLcode (*write_resp)(struct Curl_easy *data, const char *buf, size_t blen,
+ bool is_eos, bool *done);
```
The name was changed to clarify that this writes response data to the
client side. The parameter changes are:
* `conn` removed as it always operates on `data->conn`
* `pconsumed` removed as the method needs to handle all data on success
* `readmore` removed as no longer necessary
* `is_eos` as indicator that this is the last call for the transfer
response (end-of-stream).
* `done` TRUE on return iff the transfer response is to be treated as
finished
This change affects many files only because of updated comments in
handlers that provide no implementation. The real change is that the
HTTP protocol handlers now provide an implementation.
The HTTP protocol handlers `write_resp()` implementation will get passed
**all** raw data of a server response for the transfer. The HTTP/1.x
formatted status and headers, as well as the undecoded response
body. `Curl_http_write_resp_hds()` is used internally to parse the
response headers and pass them on. This method is public as the RTSP
protocol handler also uses it.
HTTP/1.1 "chunked" transport encoding is now part of the general
*content encoding* writer stack, just like other encodings. A new flag
`CLIENTWRITE_EOS` was added for the last client write. This allows
writers to verify that they are in a valid end state. The chunked
decoder will check if it indeed has seen the last chunk.
The general response handling in `transfer.c:466` happens in function
`readwrite_data()`. This mainly operates now like:
```
static CURLcode readwrite_data(data, ...)
{
do {
Curl_xfer_recv_resp(data, buf)
...
Curl_xfer_write_resp(data, buf)
...
} while(interested);
...
}
```
All the response data handling is implemented in
`Curl_xfer_write_resp()`. It calls the protocol handler's `write_resp()`
implementation if available, or does the default behaviour.
All raw response data needs to pass through this function. Which also
means that anyone in possession of such data may call
`Curl_xfer_write_resp()`.
Closes#12480
- the option names are now alpha sorted and lookup is a lot faster
- use case-sensitive matching. It was previously case-insensitive, but that
was neither documented nor tested.
- remove "partial match" feature. It was not documented, not tested and
was always fragile as existing use could break when we add a new
option
- lookup short options via a table
Closes#12631
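A generic sketch of the lookup approach, presumably binary search over the alpha-sorted table with case-sensitive, full-string matching and no partial matches. The table contents below are made up and not curl's real option table.
```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

struct longopt {
  const char *name;   /* must be kept alpha-sorted */
  int id;
};

static const struct longopt opts[] = {
  {"data", 1}, {"header", 2}, {"output", 3}, {"verbose", 4},
};

static int cmp(const void *key, const void *entry)
{
  return strcmp((const char *)key, ((const struct longopt *)entry)->name);
}

static const struct longopt *findopt(const char *name)
{
  return bsearch(name, opts, sizeof(opts)/sizeof(opts[0]),
                 sizeof(opts[0]), cmp);
}

int main(void)
{
  printf("%d\n", findopt("header") ? findopt("header")->id : -1); /* 2 */
  printf("%d\n", findopt("head") ? 1 : -1);    /* -1: no partial matches */
  printf("%d\n", findopt("Header") ? 1 : -1);  /* -1: case-sensitive */
  return 0;
}
```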
When Content-Disposition parsing is used and an output dir is prepended,
make sure to store that new file name correctly so that it can be used
for setting the file timestamp when --remote-time is used.
Extended test 3012 to verify.
Co-Authored-by: Jay Satiro
Reported-by: hgdagon on github
Fixes#12614
Closes#12617
- add test cases for rate limiting uploads for all
http versions
- fix transfer loop handling of limits. Signal a re-receive
attempt only on exhausting maxloops without an EAGAIN
- fix `data->state.selectbits` forcing re-receive to also
set re-sending when transfer is doing this.
Reported-by: Karthikdasari0423 on github
Fixes#12559
Closes#12586
- there seems to be a code path that cleans up easy handles without
triggering DONE or DETACH events to the connection filters. This
would explain why nghttp2 still holds stream user data
- add GOOD check to easy handle used in on_close_callback to
prevent crashes, ASSERTs in debug builds.
- NULL the stream user data early before submitting RST
- add checks in on_stream_close() to identify UNGOOD easy handles
Reported-by: Hans-Christian Egtvedt
Fixes#10936
Closes#12562
In a test case using lots of snprintf() calls using many commonly used
%-codes per call, this version is around 30% faster than previous
version.
It also fixes the #12561 bug which made it not behave correctly when
given unknown %-sequences. Fixing that flaw required a different take on
the problem, which resulted in the new two-arrays model.
lib557: extended - Verify the #12561 fix and test more printf features
unit1398: fix test: It used a <num>$ only for one argument, which is not
supported.
Fixes#12561
Closes#12563
- enable `-Wsign-conversion` warnings, but also setting them to not
raise errors.
- fix `-Warith-conversion` warnings seen in CI.
These are triggered by `-Wsign-conversion` and cause errors unless
explicitly silenced. It makes more sense to fix them, as there are just a
few of them.
- fix some `-Wsign-conversion` warnings.
- hide `-Wsign-conversion` warnings with a `#pragma`.
- add macro `CURL_WARN_SIGN_CONVERSION` to unhide them on a per-build
basis.
- update a CI job to unhide them with the above macro:
https://github.com/curl/curl/actions/workflows/linux.yml -> OpenSSL -O3
Closes#12492
`winsock2.h` pulls in `windows.h`. `ws2tcpip.h` pulls in `winsock2.h`.
`winsock2.h` and `ws2tcpip.h` are also pulled by `curl/curl.h`.
Keep only those headers that are not already included, or the code under
it uses something from that specific header.
Closes#12539
Follow-up to 63b5748
Invokes the test case via lldb instead of gdb. Since using gdb is such a
pain on mac, using lldb is sometimes less quirky.
Closes#12547
A new error code to be used when an internal field grows too large, like
when a dynbuf reaches its maximum. Previously it would return
CURLE_OUT_OF_MEMORY for this, which is highly misleading.
Ref: #12268
Closes#12269
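A hedged example of telling the two conditions apart, assuming `CURLE_TOO_LARGE` is the name of the new error code (the text above does not spell it out).
```c
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  CURLcode rc = CURLE_OK;
  if(!curl)
    return 1;
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  rc = curl_easy_perform(curl);
  if(rc == CURLE_TOO_LARGE)
    fprintf(stderr, "an internal field hit its maximum: %s\n",
            curl_easy_strerror(rc));
  else if(rc == CURLE_OUT_OF_MEMORY)
    fprintf(stderr, "a real allocation failure: %s\n",
            curl_easy_strerror(rc));
  curl_easy_cleanup(curl);
  return (int)rc;
}
```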
When running on termux, where $TMPDIR isn't /tmp, running the tests
failed, since the server config tried creating sockets in /tmp, without
checking the temp dir config. Use the TMPDIR variable that makes it find
the correct directory everywhere [0]
[0] https://perldoc.perl.org/File::Temp#tempfile
Closes#12545
https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html
as of 2023-11-29 [1].
Enable new recommended warnings (except `-Wsign-conversion`):
- enable `-Wformat=2` for clang (in both cmake and autotools).
- add `CURL_PRINTF()` internal attribute and mark functions accepting
printf arguments with it. This is a copy of existing
`CURL_TEMP_PRINTF()` but using `__printf__` to make it compatible
with redefining the `printf` symbol:
https://gcc.gnu.org/onlinedocs/gcc-3.0.4/gcc_5.html#SEC94
- fix `CURL_PRINTF()` and existing `CURL_TEMP_PRINTF()` for
mingw-w64 and enable it on this platform.
- enable `-Wimplicit-fallthrough`.
- enable `-Wtrampolines`.
- add `-Wsign-conversion` commented with a FIXME.
- cmake: enable `-pedantic-errors` the way we do it with autotools.
Follow-up to d5c0351055 #2747
- lib/curl_trc.h: use `CURL_FORMAT()`, this also fixes it to enable format
checks. Previously it was always disabled due to the internal `printf`
macro.
Fix them:
- fix a bug where a `set_ipv6_v6only()` call was missed in builds with
`--disable-verbose` / `CURL_DISABLE_VERBOSE_STRINGS=ON`.
- add internal `FALLTHROUGH()` macro.
- replace obsolete fall-through comments with `FALLTHROUGH()`.
- fix fallthrough markups: Delete redundant ones (showing up as
warnings in most cases). Add missing ones. Fix indentation.
- silence `-Wformat-nonliteral` warnings with llvm/clang.
- fix one `-Wformat-nonliteral` warning.
- fix new `-Wformat` and `-Wformat-security` warnings.
- fix `CURL_FORMAT_SOCKET_T` value for mingw-w64. Also move its
definition to `lib/curl_setup.h` allowing use in `tests/server`.
- lib: fix two wrongly passed string arguments in log outputs.
Co-authored-by: Jay Satiro
- fix new `-Wformat` warnings on mingw-w64.
[1] 56c0fde389/docs/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C%2B%2B.md
Closes#12489
It is hard to name the scripts sensibly. Lots of them are similarly
named and the name did not tell which test that used them.
The new approach is rather to name them based on the test number that
runs them. Also helps us see which scripts are for individual tests
rather than for general test infra.
- badsymbols.pl -> test1167.pl
- check-deprecated.pl -> test1222.pl
- check-translatable-options.pl -> test1544.pl
- disable-scan.pl -> test1165.pl
- error-codes.pl -> test1175.pl
- errorcodes.pl -> test1477.pl
- extern-scan.pl -> test1135.pl
- manpage-scan.pl -> test1139.pl
- manpage-syntax.pl -> test1173.pl
- markdown-uppercase.pl -> test1275.pl
- mem-include-scan.pl -> test1132.pl
- nroff-scan.pl -> test1140.pl
- option-check.pl -> test1276.pl
- options-scan.pl -> test971.pl
- symbol-scan.pl -> test1119.pl
- version-scan.pl -> test1177.pl
Closes#12487
To help users better understand where the URL (and denied scheme) comes
from. Also removed "in libcurl" from the message, since the disabling
can be done by the application.
The error message now says "not supported" or "disabled" depending on
why it was denied:
Protocol "hej" not supported
Protocol "http" disabled
And in redirects:
Protocol "hej" not supported (in redirect)
Protocol "http" disabled (in redirect)
Reported-by: Mauricio Scheffer
Fixes#12465
Closes#12469
char variables if unspecified can be either signed or unsigned depending
on the platform according to the C standard; on most platforms, they are
signed.
This meant that the *i<32 test was always true for bytes with the top bit
set. So they were always getting encoded as \uXXXX, and then since they
were also signed negative, they were getting extended with 1s causing
'\xe2' to be expanded to \uffffffe2, for example:
$ curl --variable 'v=“' --expand-write-out '{{v:json}}\n' file:///dev/null
\uffffffe2\uffffff80\uffffff9c
I fixed this bug by making the code use explicitly unsigned char*
variables instead of char* variables.
Test 268 verifies
Reported-by: iconoclasthero
Closes#12434
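A stand-alone demonstration (not curl code) of the signed-char pitfall described above: where plain char is signed, bytes with the top bit set compare as negative and sign-extend when widened, which is what produced the \uffffffe2 output.
```c
#include <stdio.h>

int main(void)
{
  const char *s = "\xe2\x80\x9c";   /* UTF-8 encoding of the “ character */
  const unsigned char *u = (const unsigned char *)s;
  int i;
  for(i = 0; s[i]; i++) {
    /* with signed char, s[i] < 32 is true here and the cast sign-extends */
    printf("buggy: \\u%08lx  fixed: \\u%04x\n",
           (unsigned long)s[i] & 0xffffffffUL, (unsigned int)u[i]);
  }
  return 0;
}
```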
The script errorcodes.pl extracts all error codes from all headers and
checks that they are all documented, then checks that all documented
error codes are also specified in a header file.
Closes#12424
When the config file parser detects a word that *probably* should be
quoted, mention double-quotes as a possible remedy.
Test 459 verifies.
Proposed-by: Jiehong on github
Fixes#12409
Closes#12412
Use the closure handle for disconnecting connection cache entries so
that anything that happens during the disconnect is not stored and
associated with the 'data' handle, which just finished a transfer. It is
important that details from the unrelated disconnect do not taint
meta-data in the data handle.
Like storing the response code.
This also adjusts test 1506. Unfortunately it also removes a key part of
the test that verifies that a connection is closed since when this
output vanishes (because the closure handle is used), we don't know
exactly that the connection actually gets closed in this test...
Reported-by: ohyeaah on github
Fixes#12367
Closes#12405
Windows compilers define `_WIN32` automatically. Windows SDK headers
or build env defines `WIN32`, or we have to take care of it. The
agreement seems to be that `_WIN32` is the preferred practice here.
Make the source code rely on that to detect we're building for Windows.
Public `curl.h` was using `WIN32`, `__WIN32__` and `CURL_WIN32` for
Windows detection, next to the official `_WIN32`. After this patch it
only uses `_WIN32` for this. Also, make it stop defining `CURL_WIN32`.
There is a slight chance these break compatibility with Windows
compilers that fail to define `_WIN32`. I'm not aware of any obsolete
or modern compiler affected, but in case there is one, one possible
solution is to define this macro manually.
grepping for `WIN32` remains useful to discover Windows-specific code.
Also:
- extend `checksrc` to ensure we're not using `WIN32` anymore.
- apply minor formatting here and there.
- delete unnecessary checks for `!MSDOS` when `_WIN32` is present.
Co-authored-by: Jay Satiro
Reviewed-by: Daniel Stenberg
Closes#12376
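A hedged illustration of the preferred pattern after this change: rely only on `_WIN32`, which Windows compilers define automatically, rather than `WIN32`, `__WIN32__` or `CURL_WIN32`.
```c
#ifdef _WIN32
#include <winsock2.h>
#include <ws2tcpip.h>
#else
#include <sys/socket.h>
#endif

int main(void)
{
#ifdef _WIN32
  /* Windows-specific path */
#else
  /* everything else */
#endif
  return 0;
}
```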
Enable more picky compiler warnings. I've found these options in the
nghttp3 project when implementing the CMake quick picky warning
functionality for it [1].
`-Wunused-macros` was too noisy to keep around, but fixed a few issues
it revealed while testing.
- autotools: reflect the more precisely-versioned clang warnings.
Follow-up to 033f8e2a08 #12324
- autotools: sync between clang and gcc the way we set `no-multichar`.
- autotools: avoid setting `-Wstrict-aliasing=3` twice.
- autotools: disable `-Wmissing-noreturn` for MSYS gcc targets [2].
It triggers in libtool-generated stub code.
- lib/timeval: delete a redundant `!MSDOS` guard from a `WIN32` branch.
- lib/curl_setup.h: delete duplicate declaration for `fileno`.
Added in initial commit ae1912cb0d
(1999-12-29). This suggests this may not be needed anymore, but if
it does, we may restore this for those specific (non-Windows) systems.
- lib: delete unused macro `FTP_BUFFER_ALLOCSIZE` since
c1d6fe2aaa.
- lib: delete unused macro `isxdigit_ascii` since
f65f750742.
- lib/mqtt: delete unused macro `MQTT_HEADER_LEN`.
- lib/multi: delete unused macro `SH_READ`/`SH_WRITE`.
- lib/hostip: add `noreturn` function attribute via new `CURL_NORETURN`
macro.
- lib/mprintf: delete duplicate declaration for `Curl_dyn_vprintf`.
- lib/rand: fix `-Wunreachable-code` and related fallouts [3].
- lib/setopt: fix `-Wunreachable-code-break`.
- lib/system_win32 and lib/timeval: fix double declarations for
`Curl_freq` and `Curl_isVistaOrGreater` in CMake UNITY mode [4].
- lib/warnless: fix double declarations in CMake UNITY mode [5].
This was due to force-disabling the header guard of `warnless.h` to
reapply it to source code coming after `warnless.c` in UNITY
builds. This reapplied declarations too, causing the warnings.
Solved by adding a header guard for the lines that actually need
to be reapplied.
- lib/vauth/digest: fix `-Wunreachable-code-break` [6].
- lib/vssh/libssh2: fix `-Wunreachable-code-break` and delete redundant
block.
- lib/vtls/sectransp: fix `-Wunreachable-code-break` [7].
- lib/vtls/sectransp: suppress `-Wunreachable-code`.
Detected in `else` branches of dynamic feature checks, with results
known at compile-time, e.g.
```c
if(SecCertificateCopySubjectSummary) /* -> true */
```
Likely fixable as a separate micro-project, but given SecureTransport
is deprecated anyway, let's just silence these locally.
- src/tool_help: delete duplicate declaration for `helptext`.
- src/tool_xattr: fix `-Wunreachable-code`.
- tests: delete duplicate declaration for `unitfail` [8].
- tests: delete duplicate declaration for `strncasecompare`.
- tests/libtest: delete duplicate declaration for `gethostname`.
Originally added in 687df5c8c3
(2010-08-02).
Got complicated later: c49e9683b8
If there are still systems around with warnings, we may restore the
prototype, but limited for those systems.
- tests/lib2305: delete duplicate declaration for
`libtest_debug_config`.
- tests/h2-download: fix `-Wunreachable-code-break`.
[1] a70edb08e9/cmake/PickyWarningsC.cmake
[2] https://ci.appveyor.com/project/curlorg/curl/builds/48553586/job/3qkgjauiqla5fj45?fullLog=true#L1675
[3] https://github.com/curl/curl/actions/runs/6880886309/job/18716044703?pr=12331#step:7:72
https://github.com/curl/curl/actions/runs/6883016087/job/18722707368?pr=12331#step:7:109
[4] https://ci.appveyor.com/project/curlorg/curl/builds/48555101/job/9g15qkrriklpf1ut#L204
[5] https://ci.appveyor.com/project/curlorg/curl/builds/48555101/job/9g15qkrriklpf1ut#L218
[6] https://github.com/curl/curl/actions/runs/6880886309/job/18716042927?pr=12331#step:7:290
[7] https://github.com/curl/curl/actions/runs/6891484996/job/18746659406?pr=12331#step:9:1193
[8] https://github.com/curl/curl/actions/runs/6882803986/job/18722082562?pr=12331#step:33:1870
Closes#12331
- tests: verify CMake `DISABLE` options.
Make an exception for 2 CMake-only ones, and one more that's
using a different naming scheme, also in autotools and source.
- cmake: add support for `CURL_DISABLE_HEADERS_API`.
Suggested-by: Daniel Stenberg
Ref: https://github.com/curl/curl/pull/12345#pullrequestreview-1736238641
Closes#12353
Before this patch some source files were overriding gcc warning options,
but without restoring them at the end of the file. In CMake UNITY builds
these options spilled over to the remainder of the source code,
effectively disabling them for a larger portion of the codebase than
intended.
`#pragma clang diagnostic` didn't have such issue in the codebase.
Reviewed-by: Marcel Raad
Closes#12352
GCC 14 introduces a new -Walloc-size included in -Wextra which gives:
```
src/tool_operate.c: In function ‘add_per_transfer’:
src/tool_operate.c:213:5: warning: allocation of insufficient size ‘1’ for type ‘struct per_transfer’ with size ‘480’ [-Walloc-size]
213 | p = calloc(sizeof(struct per_transfer), 1);
| ^
src/var.c: In function ‘addvariable’:
src/var.c:361:5: warning: allocation of insufficient size ‘1’ for type ‘struct var’ with size ‘32’ [-Walloc-size]
361 | p = calloc(sizeof(struct var), 1);
| ^
```
The calloc prototype is:
```
void *calloc(size_t nmemb, size_t size);
```
So, just swap the number of members and size arguments to match the
prototype, as we're initialising 1 struct of size `sizeof(struct
...)`. GCC then sees we're not doing anything wrong.
Closes#12292
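A minimal sketch of the corrected call shape; the struct here is just an illustrative stand-in for struct per_transfer.
```c
#include <stdlib.h>

struct per_transfer {
  int dummy[120];   /* placeholder fields */
};

int main(void)
{
  /* nmemb first, size second: one object of sizeof(struct per_transfer),
     which matches the calloc prototype and keeps -Walloc-size quiet */
  struct per_transfer *p = calloc(1, sizeof(struct per_transfer));
  free(p);
  return 0;
}
```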
This PR has these changes:
Renaming of unencode_* to cwriter, i.e. client writers
- documentation of sendf.h functions
- move max decode stack checks back to content_encoding.c
- define writer phase which was used as order before
- introduce phases for monitoring in between decode phases
- offering default implementations for init/write/close
Add type parameter to client writer's do_write()
- always pass all writes through the writer stack
- writers who only care about BODY data will pass other writes unchanged
add RAW and PROTOCOL client writers
- RAW used for Curl_debug() logging of CURLINFO_DATA_IN
- PROTOCOL used for updates to data->req.bytecount, max_filesize checks and
Curl_pgrsSetDownloadCounter()
- remove all updates of data->req.bytecount and calls to
Curl_pgrsSetDownloadCounter() and Curl_debug() from other code
- adjust test457 expected output to no longer see the excess write
Closes#12184
Previously just ipfs://<cid> and ipns://<cid> were supported, which is
too strict for some use cases.
This patch allows paths and query arguments to be used too.
Making this work according to normal http semantics:
ipfs://<cid>/foo/bar?key=val
ipns://<cid>/foo/bar?key=val
The gateway url support is changed.
It now only supports gateways in the form of:
http://<gateway>/foo/bar
http://<gateway>
Query arguments here are explicitly not allowed and trigger an intended
malformed URL error.
There also was a crash when IPFS_PATH was set with a non trailing
forward slash. This has been fixed.
Lastly, a load of test cases have been added to verify the above.
Reported-by: Steven Allen
Fixes#12148
Closes#12152
- tunnel https proxy used for http: transfers does
not check if the proxy-ssl configuration matches
- test cases added, test_10_12 fails on 8.4.0
Closes#12255
Finding a 'Content-Range:' in the response changed the handling.
Add test case 1475 to verify -C - with 416 and Content-Range: header,
which is almost exactly like test 194 which instead uses a fixed -C
offset. Adjusted test 194 to also be considered fine.
Fixes#10521
Reported-by: Smackd0wn
Fixes#12174
Reported-by: Anubhav Rai
Closes#12176
Connection filter had a `get_select_socks()` method, inspired by the
various `getsocks` functions involved during the lifetime of a
transfer. These, depending on transfer state (CONNECT/DO/DONE/ etc.),
return sockets to monitor and flag if this shall be done for POLLIN
and/or POLLOUT.
Due to this design, sockets and flags could only be added, not
removed. This led to problems in filters like HTTP/2 where flow control
prohibits the sending of data until the peer increases the flow
window. The general transfer loop wants to write, adds POLLOUT, the
socket is writeable but no data can be written.
This leads to cpu busy loops. To prevent that, HTTP/2 did set the
`SEND_HOLD` flag of such a blocked transfer, so the transfer loop cedes
further attempts. This works if only one such filter is involved. If an
HTTP/2 transfer goes through an HTTP/2 proxy, two filters are
setting/clearing this flag and may step on each other's toes.
Connection filters `get_select_socks()` is replaced by
`adjust_pollset()`. They get passed a `struct easy_pollset` that keeps
up to `MAX_SOCKSPEREASYHANDLE` sockets and their `POLLIN|POLLOUT`
flags. This struct is initialized in `multi_getsock()` by calling the
various `getsocks()` implementations based on transfer state, as before.
After protocol handlers/transfer loop have set the sockets and flags
they want, the `easy_pollset` is *always* passed to the filters. Filters
"higher" in the chain are called first, starting at the first
not-yet-connection one. Each filter may add sockets and/or change
flags. When all flags are removed, the socket itself is removed from the
pollset.
Example:
* transfer wants to send, adds POLLOUT
* http/2 filter has a flow control block, removes POLLOUT and adds
POLLIN (it is waiting on a WINDOW_UPDATE from the server)
* TLS filter is connected and changes nothing
* h2-proxy filter also has a flow control block on its tunnel stream,
removes POLLOUT and adds POLLIN also.
* socket filter is connected and changes nothing
* The resulting pollset is then mixed together with all other transfers
and their pollsets, just as before.
Use of `SEND_HOLD` is no longer necessary in the filters.
All filters are adapted for the changed method. The handling in
`multi.c` has been adjusted, but its state handling and the protocol
handlers' `getsocks` methods are untouched.
The most affected filters are http/2, ngtcp2, quiche and h2-proxy. TLS
filters needed to be adjusted for the connecting handshake read/write
handling.
No noticeable difference in performance was detected in local scorecard
runs.
Closes#11833
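A hedged, self-contained sketch of the adjust_pollset idea using simplified, made-up types and helper names (the real `struct easy_pollset` and connection filter API differ): a filter may add or remove interest flags per socket, and a socket drops out of the pollset once no flags remain.
```c
#include <stdbool.h>

#define MAX_SOCKS 4
#define WANT_RECV 0x1
#define WANT_SEND 0x2

struct pollset {
  int socks[MAX_SOCKS];
  unsigned char actions[MAX_SOCKS];  /* WANT_RECV | WANT_SEND per socket */
  unsigned int num;
};

/* change flags for one socket; drop the entry when no flags remain */
static void pollset_change(struct pollset *ps, int sock,
                           unsigned char add, unsigned char del)
{
  unsigned int i;
  for(i = 0; i < ps->num; i++) {
    if(ps->socks[i] == sock) {
      ps->actions[i] = (unsigned char)((ps->actions[i] | add) & ~del);
      if(!ps->actions[i]) {           /* nothing monitored: remove socket */
        ps->socks[i] = ps->socks[ps->num - 1];
        ps->actions[i] = ps->actions[ps->num - 1];
        ps->num--;
      }
      return;
    }
  }
  if(add && ps->num < MAX_SOCKS) {    /* not present yet: add it */
    ps->socks[ps->num] = sock;
    ps->actions[ps->num] = add;
    ps->num++;
  }
}

/* an h2-like filter that is flow-control blocked on `sock`: stop waiting
   for writeability, wait for a WINDOW_UPDATE from the peer instead */
static void h2_adjust_pollset(struct pollset *ps, int sock, bool blocked)
{
  if(blocked)
    pollset_change(ps, sock, WANT_RECV, WANT_SEND);
}
```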
Put the instructions to run tests right at the top of tests/README.md.
Give instructions to read the runtests.1 man page for information
about flags. Delete redundant copy of the flags documentation in the
README.
Add a mention in README.md of the important parallelism flag, to make
test runs go much faster.
Move documentation of output line format into the runtests.1 man page,
and update it with missing flags.
Fix the order of two flags in the man page.
Closes#12193
Python precheck/postcheck alternatives were included but commented out.
Since these are not used and perl is guaranteed to be available to run
the perl versions anyway, the Python ones are removed.
The checkcmd() and checktestcmd() functions would not have worked on
Windows due to hard-coding the UNIX PATH separator character and not
adding .exe file extension. This meant that tools like stunnel, valgrind
and nghttpx would not have been found and used on Windows, and
inspection of previous test runs show none of those being found in pure
Windows CI builds.
With this fixed, they can be used to detect the handle64.exe program
before attempting to use it. When handle64.exe was called
unconditionally without it existing, it caused perl to abort the test
run with the error
The running command stopped because the preference variable
"ErrorActionPreference" or common parameter is set to Stop:
sh: handle64.exe: command not found
Closes#12115
- Add additional checking for missing and too-short SOCKS5 handshake
messages.
Prior to this change the SOCKS5 test server did not check that all parts
of the handshake were received successfully. If those parts were missing
or too short then the server would access uninitialized memory.
This issue was discovered in CI job 'memory-sanitizer' test results.
Test 2055 was failing due to the SOCKS5 test server not running. It was
not running because either it crashed or memory sanitizer aborted it
during Test 728. Test 728 connects to the SOCKS5 test server on a
redirect but does not send any data on purpose. The test server was not
prepared for that.
Reported-by: Dan Fandrich
Fixes https://github.com/curl/curl/issues/12117
Closes https://github.com/curl/curl/pull/12118
This test would show an error message if the output was missing during
the log post-processing step, but the message was not captured by the
test harness and wasn't useful since the normal golden log file
comparison would show the problem more clearly.
Prior to this change the state machine attempted to change the remote
resolve to a local resolve if the hostname was longer than 255
characters. Unfortunately that did not work as intended and caused a
security issue.
Bug: https://curl.se/docs/CVE-2023-38545.html
- make result print collected write data, unless
change in meta flags is detected
- will show same result even when data arrives via
several writecb invocations
Closes#12068
If a client disconnected and reconnected quickly, before the ftp server
had a chance to respond, the protocol message/ack (ping/pong) sequence
got out of sync, causing messages sent to the old client to be delivered
to the new. A disconnect must now be acknowledged and intermediate
requests thrown out until it is, which ensures that such synchronization
problems can't occur. This problem could affect ftp, pop3, imap and smtp
tests.
Fixes#12002
Closes#12049
The test otherwise could do just about anything (except leak memory in
debug mode) and its bad behaviour wouldn't be detected. Now, check the
resulting cookie file to ensure the cookies are still there.
Closes#12041
msys2 builds actually hit the connect timeout in normal operation, so
lower the timeout from 5 minutes to 5 seconds to reduce test time.
Ref: #11328
Closes#12036
ftpserver.pl correctly cleans up spawned server processes,
but forgets to wait for the shell used to spawn them.
This is barely noticeable during a normal testrun,
but causes process exhaustion and test failure
during a complete torture run of the FTP tests.
Fixes#12018
Closes#12020
Use the test macros to automatically propagate some errors, and check
and log others while running the tests. This can help in debugging
exactly why a test has failed.
On an overloaded server, the default 1 second timeout can go by without
the test server having a chance to respond with the expected headers,
causing tests to fail. Increase the 1 second timeout to 99 seconds so
this failure mode is no longer a problem on test 1129. Some other tests
already set a high value, but make them consistently 99 seconds so if
something goes wrong the test is stalled for less time.
Ref: #11328
This was already done for automake builds but CMake builds were missed.
Test 1086 actually causes the test harness to crash with:
Warning: unable to close filehandle DWRITE properly: Broken pipe at C:/projects/curl/tests/ftpserver.pl line 527
Rather than fix it now, this change leaves test 1086 entirely skipped on
those builds that show this problem.
Follow-up to 589dca761
Ref: #11865
The three tags `<name>`, `</name>` and `<command>` were frequently used
with a leading space that this removes. The reason this habit is so
widespread in test cases is probably that they have been copied and pasted.
Hence, fixing them all now might curb this practice from now on.
Closes#12028
- refs #11982 where it was noted that paused transfers may
close successfully without delivering the complete data
- made sample poc into tests/http/client/h2-pausing.c and
added test_02_27 to reproduce
Closes#11989
Fixes#11982
Reported-by: Harry Sintonen
It sometimes happens that a test hangs during a test run and never
returns. The test harness will wait indefinitely for the results and on
CI servers the CI job will eventually be killed after an hour or two.
At the end of a test run, if results haven't come in within a couple of
minutes, display the status of all test runners and what tests they're
running to help in debugging the problem.
This feature really only kicks in with parallel testing enabled, which
is fine because without parallel testing it's usually easy to tell what
test has hung.
Closes#11980
1. References to curl symbols are now checked that they indeed exist as
man pages. This applies to \f references as well as to the names referenced
in the SEE ALSO section.
Allowlist curl.1 since it is not always built in builds
2. References to curl symbols that lack a section now cause a warning, since
that will prevent them from getting linked properly
3. Check for "bare" references to curl functions and warn, they should be
references
Closes#11963
- Enforce a single reference per .BR line
- Skip the quotes around the section number for example (3)
- Insist on trailing commas on all lines except the last
- Error on comma on the last SEE ALSO entry
- List the entries alpha-sorted, not enforced just recommended
Closes#11957
Delete checks and guards for standard C89 headers and assume these are
available: `stdio.h`, `string.h`, `time.h`, `setjmp.h`, `stdlib.h`,
`stddef.h`, `signal.h`.
Some of these we already used unconditionally, some others we only used
for feature checks.
Follow-up to 9c7165e96a #11918 (for `stdio.h` in CMake)
Closes#11940