- Update the OpenSSL connect state machine to handle
SSL_ERROR_WANT_RETRY_VERIFY.
This allows libcurl users that are using custom certificate validation
to suspend processing while waiting for external I/O during certificate
validation.
Closes https://github.com/curl/curl/pull/11499
Fix the distcheck CI failure and delete more NSS references.
Follow-up to 7c8bae0d9c
Reviewed-by: Marcel Raad
Reviewed-by: Daniel Stenberg
Closes #11548
- Implement AUTH=+LOGIN for CURLOPT_LOGIN_OPTIONS to prefer plaintext
LOGIN over SASL auth.
Prior to this change there was no way to fall back to
LOGIN if an IMAP server advertises SASL capabilities. However, this may
be desirable for e.g. a misconfigured server.
Per: https://www.ietf.org/rfc/rfc5092.html#section-3.2
";AUTH=<enc-auth-type>" looks to be the correct way to specify what
authentication method to use, regardless of SASL or not.
Closes https://github.com/curl/curl/pull/10041
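A minimal sketch of how an application might ask for this over IMAP (host
name and credentials are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com/INBOX");
      curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
      curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
      /* prefer plaintext LOGIN even if the server advertises SASL */
      curl_easy_setopt(curl, CURLOPT_LOGIN_OPTIONS, "AUTH=+LOGIN");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }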
- all: SEE ALSO the libcurl-ws man page
- send: add example and return value information
- meta: mention that the returned data is read-only
Closes #11318
- add an `id` long to Curl_easy, -1 on init
- once added to a multi (or its own multi), it gets
a non-negative number assigned by the connection cache
- `id` is unique among all transfers using the same
cache until reaching LONG_MAX where it will wrap
around. So, not unique eternally.
- CURLINFO_CONN_ID returns the connection id attached to
data or, if none present, data->state.lastconnect_id
- variables and type declared in tool for write out
Closes #11185
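A small sketch of reading the new info after a transfer has run (assumes an
existing easy handle; error handling omitted):

  #include <stdio.h>
  #include <curl/curl.h>

  static void print_conn_id(CURL *curl)
  {
    curl_off_t conn_id = -1;
    if(curl_easy_getinfo(curl, CURLINFO_CONN_ID, &conn_id) == CURLE_OK)
      printf("connection id: %" CURL_FORMAT_CURL_OFF_T "\n", conn_id);
  }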
The behavior of CURLOPT_UPLOAD differs from what is described in the
documentation. The option automatically adds the 'Transfer-Encoding:
chunked' header if the upload size is unknown.
Closes #11300
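A sketch of the behavior now documented: with no known size libcurl adds the
chunked header for HTTP/1.1 uploads, while providing the size avoids it
(callback, context and size are hypothetical):

  #include <curl/curl.h>

  static void setup_upload(CURL *curl, curl_read_callback read_cb,
                           void *ctx, curl_off_t filesize)
  {
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, ctx);
    /* without this call the size is unknown and libcurl sends
       "Transfer-Encoding: chunked" */
    if(filesize >= 0)
      curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, filesize);
  }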
These two functions were added in 7.44.0 when CURLMOPT_PUSHFUNCTION was
introduced but always lived a life in the shadows, embedded in the
CURLMOPT_PUSHFUNCTION man page. Until now.
It makes better sense and gives more visibility to document them in
their own stand-alone man pages.
Closes #11286
Previously the code would just do that for the path when extracting the
full URL, which made a subsequent curl_url_get() of the path
(unexpectedly) still return it without the leading slash.
Amend lib1560 to verify this. Clarify the curl_url_set() docs about it.
Bug: https://curl.se/mail/lib-2023-06/0015.html
Closes #11272
Reported-by: Pedro Henrique
Hook the new (1.11.0 or newer) libssh2 support for setting a read timeout
into the SERVER_RESPONSE_TIMEOUT option. With this done, clients can use
the standard curl response timeout setting to also control the time that
libssh2 will wait for packets from a slow server. This is necessary to
enable use of very slow SFTP servers.
Signed-off-by: Daniel Silverstone <daniel.silverstone@codethink.co.uk>
Closes #10965
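A sketch of how a client might use the standard option for a slow SFTP server
(host name is a placeholder):

  #include <curl/curl.h>

  static void setup_sftp(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "sftp://slow.example.com/file.bin");
    /* with libssh2 1.11.0 or newer this also bounds how long libssh2
       waits for packets from the server */
    curl_easy_setopt(curl, CURLOPT_SERVER_RESPONSE_TIMEOUT, 60L);
  }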
To reduce the damage an application can cause by using -1 or other
extreme timeout values and letting the cache live for a long time, the
maximum number of entries in the DNS cache is now hard-coded to the
entirely arbitrary value 29999.
Closes #11084
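A sketch of picking a sane cache lifetime rather than relying on extreme
values (assumes an existing easy handle):

  #include <curl/curl.h>

  static void set_dns_ttl(CURL *curl)
  {
    /* cache resolved names for ten minutes instead of -1 ("forever") */
    curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 600L);
  }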
I was reading curl_unescape(3) and I noticed that there was an extra
space after the open parenthesis in the SYNOPSIS; I removed the extra
space.
I also ran a few grep -r commands to find and remove extra spaces
after '(' in other files, and to find and replace uses of `T*' instead
of `T *'. Some of the instances of `T*` were unnecessary casts that I
removed.
I also fixed a comment that was misaligned in CURLMOPT_SOCKETFUNCTION.3.
And I fixed some formatting inconsistencies: in curl_unescape(3), all
function parameters were mentioned with bold text except length, which was
mentioned as 'length'; and, in curl_easy_unescape(3), all parameters
were mentioned in bold text except url that was italicised. Now they are
all mentioned in bold.
Documentation is not very consistent in how function parameters are
formatted: many pages italicise them, and others display them in bold
text; but I think it makes sense to at least be consistent with
formatting within the same page.
Closes #11027
- remove the version numbers
- simplify the texts
The date and version number will be put there for releases when maketgz
runs the updatemanpages.pl script.
Closes #11029
- remove h3 issues believed to be fixed
- make the flaky CI issue be generic and not Windows specific
- "TLS session cache does not work with TFO" now documented
This is now a documented restriction and not a bug. TFO in general is
rarely used and has other problems, making it a low-priority thing to
work on.
- remove "Renegotiate from server may cause hang for OpenSSL backend"
This is an OpenSSL issue, not a curl one. Even if it taints curl.
- rm "make distclean loops forever"
- rm "configure finding libs in wrong directory"
Added a section to docs/INSTALL.md about it.
- "A shared connection cache is not thread-safe"
Moved over to TODO and expanded for other sharing improvements we
could do
- rm "CURLOPT_OPENSOCKETPAIRFUNCTION is missing"
- rm "Blocking socket operations in non-blocking API"
Already listed as a TODO
- rm "curl compiled on OSX 10.13 failed to run on OSX 10.10"
Water under the bridge. No one cares about this anymore.
- rm "build on Linux links libcurl to libdl"
Verified to not be true (anymore).
- rm "libpsl is not supported"
The cmake build supports it since cafb356e19
Closes #10963
* Configure changes to detect AWS-LC
* CMakeLists.txt changes to detect AWS-LC
* Compile-time branches needed to support AWS-LC
* Correctly set OSSL_VERSION and report AWS-LC release number
* GitHub Actions script to build with autoconf and cmake against AWS-LC
AWS-LC is a BoringSSL/OpenSSL derivative
For more information see https://github.com/awslabs/aws-lc/
Closes #10320
The test does a slightly ugly busy-loop for this case but should be
manageable since it is likely a very short moment.
Mention CURLE_AGAIN in curl_ws_recv.3
Fixes #10760
Reported-by: Jay Satiro
Closes #10781
All s3 requests default to UNSIGNED-PAYLOAD and add the required
x-amz-content-sha256 header. This allows CURLAUTH_AWS_SIGV4 to correctly
sign s3 requests to Amazon with no additional configuration.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Closes #9995
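A sketch of an s3 request signed this way (bucket, region and credentials are
placeholders):

  #include <curl/curl.h>

  static void setup_s3_get(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://mybucket.s3.amazonaws.com/object-key");
    /* provider1:provider2:region:service */
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESS_KEY_ID:SECRET_ACCESS_KEY");
  }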
It results in the error "NSS error -5985 (PR_ADDRESS_NOT_SUPPORTED_ERROR)".
Disabled test 1470 for NSS builds and documented the restriction.
Reported-by: Dan Fandrich
Fixes #10723
Closes #10734
Current test for curl_free() may produce warnings with strict compiler
flags or even with default compiler flags with upcoming versions.
These warnings could be turned into errors by -Werror or similar flags.
Such warnings/errors are avoided by this patch.
Closes #10710
The variable had a few different names. Now try to use 'clientp'
consistently for all man pages using a custom pointer set by the
application.
Reported-by: Gerrit Renker
Fixes #10434
Closes #10435
Bump the limit from 512K. There might be reasons for applications using
h3 to set larger buffers and there is no strong reason for curl to have
a very small maximum.
Ref: https://curl.se/mail/lib-2023-01/0026.html
Closes #10256
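Assuming this refers to the CURLOPT_BUFFERSIZE maximum, a sketch of asking for
a receive buffer larger than the old 512K ceiling:

  #include <curl/curl.h>

  static void set_big_buffer(CURL *curl)
  {
    /* 1 MB receive buffer; useful for fast HTTP/3 transfers */
    curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 1024L * 1024L);
  }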
- Warn that in Windows if libcurl is running from a DLL and if
CURLOPT_HEADERDATA is set then CURLOPT_WRITEFUNCTION or
CURLOPT_HEADERFUNCTION must be set as well, otherwise the user may
experience crashes.
We already have a similar warning in CURLOPT_WRITEDATA. Basically, in
Windows libcurl could crash writing a FILE pointer that was created by
a different C runtime. In Windows each DLL that is part of a program may
or may not have its own C runtime.
Ref: https://github.com/curl/curl/issues/10231
Closes https://github.com/curl/curl/pull/10233
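A sketch of the documented advice: pair CURLOPT_HEADERDATA with a callback so
the FILE pointer is only written to by the module that created it:

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t header_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
  {
    /* runs in the caller's module, so the FILE * uses the right CRT */
    return fwrite(ptr, size, nmemb, (FILE *)userdata);
  }

  static void setup_headers(CURL *curl, FILE *headerfile)
  {
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
    curl_easy_setopt(curl, CURLOPT_HEADERDATA, headerfile);
  }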
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow non-year using copyright statements
Closes #10205
- curl_ws_send returns CURLE_SEND_ERROR if data->conn is gone
- curl_ws_recv returns CURLE_GOT_NOTHING on connection close
- curl_ws_recv.3: mention new return code for connection close + example
embryo
Closes #10084
- CURL_GLOBAL_SSL
This option was changed in libcurl 7.57.0 and clearly it has not caused
too many issues and a lot of time has passed.
- Store TLS context per transfer instead of per connection
This is a possible future optimization. One that is much less important
and interesting since the added support for CA caching.
- Microsoft telnet server
This bug was filed in May 2007 against curl 7.16.1 and we have not
received further reports.
- active FTP over a SOCKS
Actually, proxies in general do not work with active FTP mode. This is
now noted in the proxy documentation.
- DICT responses show the underlying protocol
curl still does this, but since this has been the established behavior
forever we cannot change it easily, and adding an option for it seems
excessive given how little this protocol is used. Let's just live with
it.
- Secure Transport disabling hostname validation also disables SNI
This is an already documented restriction in Secure Transport.
- CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
The curl_formadd() function is marked and documented as deprecated. No
point in collecting bugs for it. It should not be used further.
- STARTTRANSFER time is wrong for HTTP POSTs
After close source code inspection I cannot see how this is true or that
there is any special treatment for different HTTP methods. We also have
not received many further reports on this, making me strongly suspect
that this is no (longer an) issue.
- multipart formposts file name encoding
The once-proposed RFC 5987 encoding is, since RFC 7578, documented as
MUST NOT be used. The MIME API implemented since then lets users set the
name themselves and thus provide it encoded however they want.
- DoH is not used for all name resolves when enabled
It is questionable if users actually want to use DoH for interface and
FTP port name resolving. This restriction is now documented and we
advise users against using name resolving at all for these functions.
Closes #10043
Deprecation and removal of codeset conversion support from the library
have removed the strict need for an early binding of mime structures to
an easy handle (https://github.com/curl/curl/commit/2610142).
This constraint currently forces the handle to be created before the
mime structure, and the latter cannot be attached to another handle once
created (see https://curl.se/mail/lib-2022-08/0027.html).
This commit removes the handle pointers from the mime structures
allowing more flexibility on their use.
When an easy handle is duplicated, bound mime structures must however
still be duplicated too as their components hold send-time dynamic
information.
Closes #9927
- "FTP with CONNECT and slow server"
I believe this is not a problem these days.
- "FTP with NULs in URL parts"
The FTP protocol does not support them properly anyway.
- remove "FTP and empty path parts in the URL"
I don't think this has ever been reported as a real problem but was only
a hypothetical one.
- "Premature transfer end but healthy control channel"
This is not a bug, this is an optimization that *could* be performed but is
not an actual problem.
- "FTP without or slow 220 response"
Instead add to the documentation of the connect timeout that the
connection is considered complete at the TCP/TLS/QUIC layer.
Closes #9979
`Curl_output_aws_sigv4()` doesn't always have the whole payload in
memory to generate a real payload hash. This commit allows the user to
pass in a header like `x-amz-content-sha256` to provide their desired
payload hash.
Some services like s3 require this header, and may support other values
like s3's `UNSIGNED-PAYLOAD` and `STREAMING-AWS4-HMAC-SHA256-PAYLOAD`
with special semantics. Servers use this header's value as the payload
hash during signature validation, so it must match what the client uses
to generate the signature.
CURLOPT_AWS_SIGV4.3 now describes the content-sha256 interaction
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Closes #9804
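A sketch of supplying the payload hash header next to CURLOPT_AWS_SIGV4 (the
region/service string is an example):

  #include <curl/curl.h>

  static struct curl_slist *setup_s3_upload(CURL *curl)
  {
    struct curl_slist *hdrs = NULL;
    /* the value is used as the payload hash when signing */
    hdrs = curl_slist_append(hdrs, "x-amz-content-sha256: UNSIGNED-PAYLOAD");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    return hdrs; /* free with curl_slist_free_all() after the transfer */
  }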
Add a deprecated attribute to functions and enum values that should not
be used anymore.
This uses a gcc 4.3 dialect, thus is only available for this version of
gcc and newer. Note that the _Pragma() keyword is introduced by C99, but
is available as part of the gcc dialect even when compiling in C89 mode.
It is still possible to disable deprecation at a calling module compile
time by defining CURL_DISABLE_DEPRECATION.
Gcc type checking macros are made aware of possible deprecations.
Some testing support Perl programs are adapted to the extended
declaration syntax.
Several test and unit test C programs intentionally use deprecated
functions/options and are annotated to not generate a warning.
New test 1222 checks the deprecation status in doc and header files.
Closes #9667
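For instance, a calling module that still needs deprecated symbols without
warnings can opt out before including the header:

  /* silence libcurl deprecation warnings for this compilation unit */
  #define CURL_DISABLE_DEPRECATION
  #include <curl/curl.h>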
Field feature_names contains a null-terminated sorted array of feature
names. Bitmask field features is deprecated.
Documentation is updated. Test 1177 and tests/version-scan.pl updated to
match new documentation format and extended to check feature names too.
Closes #9583
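A sketch of walking the new array:

  #include <stdio.h>
  #include <curl/curl.h>

  static void list_features(void)
  {
    const curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    const char *const *name;
    for(name = info->feature_names; *name; name++)
      printf("feature: %s\n", *name);
  }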
Prior to this change if the user wanted to signal an error from their
write callbacks they would have to use logic to return a value different
from the number of bytes (nmemb) passed to the callback. Also, the
inclination of some users has been to just return 0 to signal error,
which is incorrect as that may be the number of bytes passed to the
callback.
To remedy this the user can now return CURL_WRITEFUNC_ERROR instead.
Ref: https://github.com/curl/curl/issues/9873
Closes https://github.com/curl/curl/pull/9874
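A sketch of a write callback using the new return value (writes to a FILE *
set with CURLOPT_WRITEDATA):

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
  {
    size_t written = fwrite(ptr, size, nmemb, (FILE *)userdata);
    if(written != nmemb)
      return CURL_WRITEFUNC_ERROR; /* aborts with CURLE_WRITE_ERROR */
    return written;
  }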
Adds a new option to control the maximum time that a cached
certificate store may be retained for.
Currently only the OpenSSL backend implements support for
caching certificate stores.
Closes #9620
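The entry does not name the option; assuming it is CURLOPT_CA_CACHE_TIMEOUT, a
sketch of shortening the retention time:

  #include <curl/curl.h>

  static void set_ca_cache(CURL *curl)
  {
    /* drop a cached certificate store after five minutes */
    curl_easy_setopt(curl, CURLOPT_CA_CACHE_TIMEOUT, 300L);
  }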
A regression in 7.86.0 (via 1e9a538e05) made the tailmatch work
differently than before. This restores the logic to how it used to work:
all names listed in NO_PROXY are tailmatched against the used domain
name; if the lengths are identical, a full match is required.
Update the docs, update test 1614.
Reported-by: Stuart Henderson
Fixes #9842
Closes #9858
For both IPv4 and IPv6 addresses. Now also checks IPv6 addresses "correctly"
and not with string comparisons.
Split out the noproxy checks and functionality into noproxy.c
Added unit test 1614 to verify checking functions.
Reported-by: Mathieu Carbonneaux
Fixes #9773
Fixes #5745
Closes #9775
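A sketch of the option this affects (proxy host is a placeholder):

  #include <curl/curl.h>

  static void setup_proxy(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
    /* "example.com" is tailmatched against the transfer's host name, so
       "www.example.com" is also excluded from proxy use */
    curl_easy_setopt(curl, CURLOPT_NOPROXY, "example.com,localhost");
  }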
"You never needed a pass phrase" reads like it's about to be followed by
something like "until version so-and-so", but that is not what is
intended. Change to "You never need a pass phrase". There are two
instances of this text, so make sure to update both.
curl_ws_recv() now receives data to fill up the provided buffer, but can
return a partial fragment. The function now also gets a pointer to a
curl_ws_frame struct with metadata that also mentions the offset and
total size of the fragment (of which you might be receiving a smaller
piece). This way, large incoming fragments will be "streamed" to the
application. When the curl_ws_frame struct field 'bytesleft' is 0, the
final fragment piece has been delivered.
curl_ws_recv() was also adjusted to work with a buffer size smaller than
the fragment size. (Possibly needless to say as the fragment size can
now be 63 bits large).
curl_ws_send() now supports sending a piece of a fragment, in a
streaming manner, in addition to sending the entire fragment in a single
call if it is small enough. To send a huge fragment, curl_ws_send() can
be used to send it in many small calls by first telling libcurl about
the total expected fragment size, and then send the payload in N number
of separate invokes and libcurl will stream those over the wire.
The struct curl_ws_meta() returns is now called 'curl_ws_frame' and it
has been extended with two new fields: *offset* and *bytesleft*. To help
describe the passed on data chunk when a fragment is delivered in many
smaller pieces.
The documentation has been updated accordingly.
Closes #9636
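A sketch of draining one fragment with a small buffer on a connect-only
WebSocket handle (handle_chunk() is a hypothetical application function):

  #include <curl/curl.h>

  void handle_chunk(const char *buf, size_t len,
                    const struct curl_ws_frame *meta); /* hypothetical */

  static CURLcode read_fragment(CURL *curl)
  {
    char buf[256];
    size_t nread;
    const struct curl_ws_frame *meta = NULL;
    CURLcode res;

    do {
      res = curl_ws_recv(curl, buf, sizeof(buf), &nread, &meta);
      if(res == CURLE_OK)
        /* buf holds nread bytes at meta->offset within the fragment */
        handle_chunk(buf, nread, meta);
    } while(res == CURLE_OK && meta->bytesleft > 0);
    /* when meta->bytesleft reaches 0, the final piece has been delivered */
    return res;
  }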
The former wording, which also suggested using a non-existing file to
just enable the cookie engine, could lead developers to somewhat
carelessly guess a file name that does not exist. At some point in the
future such a file could come to exist, and libcurl would then
accidentally read cookies that were never meant for it.
Reported-by: Trail of Bits
Closes #9654
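The now-recommended way to enable the cookie engine without reading any file:

  #include <curl/curl.h>

  static void enable_cookies(CURL *curl)
  {
    /* a blank string enables the cookie engine without a cookie file */
    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
  }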
The introduction of CURL_DISABLE_MIME came with some additional bugs:
- Disabled MIME is compiled-in anyway if SMTP and/or IMAP is enabled.
- CURLOPT_MIMEPOST, CURLOPT_MIME_OPTIONS and CURLOPT_HTTPHEADER are
conditioned on HTTP, although also needed for SMTP and IMAP MIME mail
uploads.
In addition, the CURLOPT_HTTPHEADER and --header documentation does not
mention their use for MIME mail.
This commit fixes the problems above.
Closes #9610
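A sketch of a mime mail upload where these options apply outside HTTP (the
extra mail headers, such as Subject:, come from the caller):

  #include <curl/curl.h>

  static curl_mime *attach_mail(CURL *curl, struct curl_slist *mail_headers)
  {
    curl_mime *mime = curl_mime_init(curl);
    curl_mimepart *part = curl_mime_addpart(mime);
    curl_mime_data(part, "Hello from libcurl", CURL_ZERO_TERMINATED);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, mail_headers);
    curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
    return mime; /* free with curl_mime_free() after the transfer */
  }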
This is how the RFC calls the protocol. Also rename the file in docs/ to
WEBSOCKET.md in uppercase to match how we have done it for many other
protocol docs in similar fashion.
Add the WebSocket docs to the tarball.
Closes #9496
Next Protocol Negotiation is a TLS extension that was created and used
for agreeing to use the SPDY protocol (the precursor to HTTP/2) for
HTTPS. In the early days of HTTP/2, before the spec was finalized and
shipped, the protocol could be enabled using this extension with some
servers.
curl supports the NPN extension with some TLS backends since then, with
a command line option `--npn` and in libcurl with
`CURLOPT_SSL_ENABLE_NPN`.
HTTP/2 proper is made to use the ALPN (Application-Layer Protocol
Negotiation) extension and the NPN extension has no purpose
anymore. The HTTP/2 spec was published in May 2015.
Today, use of NPN in the wild should be extremely rare and most likely
totally extinct. Chrome removed NPN support in Chrome 51, shipped in
June 2016. Removed in Firefox 53, April 2017.
Closes #9307
Lintian (on Debian) has been complaining about this for a while but
I didn't bother initially as the groff parser that we use is not
affected by this.
But I have now noticed that the online manpage is affected by it:
https://curl.se/libcurl/c/CURLOPT_WILDCARDMATCH.html
(I'm using double quotes for quoting-only down below)
The section that should be parsed as "'\'" ends up being parsed as
"'´".
This is due to roffit not parsing "'\\'" correctly, which is fine
as the "correct" way of writing "'\'" is "'\e'" instead.
Note that this fix is not enough to fix the online manpage at
curl's website, as roffit seems to parse it wrongly either way.
My intent is to at least fix the manpage so that roffit can
be changed to parse "'\e'" correctly (although I suggest making
roffit parse both ways correctly, since that's what groff does).
More details at:
https://bugs.debian.org/966803930b18e4b2/tags/a/acute-accent-in-manual-page.tag
Closes #9418
- Support TLS 1.3 as the default max TLS version for Windows Server 2022
and Windows 11.
- Support specifying TLS 1.3 ciphers via existing option
CURLOPT_TLS13_CIPHERS (tool: --tls13-ciphers).
Closes https://github.com/curl/curl/pull/8419
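A sketch of selecting a TLS 1.3 cipher suite with the existing option (the
cipher name is just an example; requires a new enough Windows):

  #include <curl/curl.h>

  static void setup_tls13(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, (long)CURL_SSLVERSION_TLSv1_3);
    curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS, "TLS_AES_256_GCM_SHA384");
  }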
Starting now, CURLOPT_FTP_RESPONSE_TIMEOUT is the alias instead of the
other way around.
Since 7.20.0, CURLOPT_SERVER_RESPONSE_TIMEOUT has existed as an alias
but since the option is for more protocols than FTP the more "correct"
version of the option is the "server" one so now we switch.
Closes #9104
... as replacements for deprecated CURLOPT_PROTOCOLS and
CURLOPT_REDIR_PROTOCOLS as these new ones do not risk running into the
32 bit limit the old ones are facing.
CURLINFO_PROTOCOL is now deprecated.
The curl tool is updated to use the new options.
Added test 1597 to verify the libcurl protocol parser.
Closes #8992
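A sketch of the new string options (comma-separated, case-insensitive protocol
names):

  #include <curl/curl.h>

  static void restrict_protocols(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_PROTOCOLS_STR, "https");
    curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS_STR, "http,https");
  }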
During the packaging of the latest curl release for Debian, Lintian
warned me about a typo which causes the section name "Secrets in memory"
to not be rendered in the manpage due to "SH_" not being recognized as a
header.
Closes #9057
.. and update some docs to explain curl_global_* is now thread-safe.
Follow-up to 23af112 which made curl_global_init/cleanup thread-safe.
Closes https://github.com/curl/curl/pull/9016
- Remove misleading text that says progress function "gets called at
least once per second, even if the connection is paused."
The progress function behavior is more nuanced and the user is better
served reading the progress function doc rather than attempt to explain
it in the curl_easy_pause doc.
The progress function can only be called at least once per second if an
appropriate multi transfer function is called (eg curl_multi_perform) in
that time. For a paused transfer there may not be such a call. Rather
than explain this in detail in the curl_easy_pause doc, rely on the user
reading the CURLOPT_PROGRESSFUNCTION doc.
Ref: https://github.com/curl/curl/issues/8983
Closes https://github.com/curl/curl/pull/9015
Referring to Daniel's article [1], making the init function thread-safe
was the last bit to make libcurl thread-safe as a whole. So the name of
the feature may as well be the more concise 'threadsafe', also telling
the story that libcurl is now fully thread-safe, not just its init
function. Chances are high that libcurl wants to remain so in the
future, so there is little likelihood of ever needing any other distinct
`threadsafe-<name>` feature flags.
For consistency we also shorten `CURL_VERSION_THREADSAFE_INIT` to
`CURL_VERSION_THREADSAFE`, update its description and reference libcurl's
thread safety documentation.
[1]: https://daniel.haxx.se/blog/2022/06/08/making-libcurl-init-more-thread-safe/
Reviewed-by: Daniel Stenberg
Reviewed-by: Jay Satiro
Closes #8989
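A sketch of testing for the renamed feature bit at runtime:

  #include <stdio.h>
  #include <curl/curl.h>

  static void check_threadsafe(void)
  {
    const curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    if(info->features & CURL_VERSION_THREADSAFE)
      printf("curl_global_init() is thread-safe in this build\n");
  }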
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a Github workflow to check pull requests and adapts
copyright.pl to the changes.
Closes #8869
Adds two new error codes: CURLE_UNRECOVERABLE_POLL and
CURLM_UNRECOVERABLE_POLL, one each for the easy and the multi interfaces.
Reported-by: Harry Sintonen
Fixes #8921
Closes #8961
The e-mail link in the advice contains instructions that are prone to
error. We need an example that works and can demonstrate how to properly
perform a ranged upload, and then we can refer to that example instead.
Bug: https://github.com/curl/curl/issues/8969
Reported-by: Simon Berger
Closes https://github.com/curl/curl/pull/8970
This flag can be used to make sure that curl_global_init() is
thread-safe.
This can be useful for libraries that can't control what other
dependencies are doing with Curl.
Closes #8680
The callback set by CURLOPT_SSH_HOSTKEYFUNCTION is called to check
whether or not the connection should continue.
The host key is passed in argument with a custom handle for the
application.
It overrides CURLOPT_SSH_KNOWNHOSTS
Closes #7959
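A sketch of such a callback, assuming the prototype and the CURLKHMATCH_*
return values described in the option's documentation (the application
context and trust check are hypothetical):

  #include <curl/curl.h>

  struct app_ctx;                                                 /* hypothetical */
  int app_key_is_trusted(struct app_ctx *, const char *, size_t); /* hypothetical */

  static int hostkey_cb(void *clientp, int keytype, const char *key,
                        size_t keylen)
  {
    struct app_ctx *ctx = (struct app_ctx *)clientp;
    (void)keytype; /* e.g. CURLKHTYPE_RSA */
    return app_key_is_trusted(ctx, key, keylen) ?
           CURLKHMATCH_OK : CURLKHMATCH_MISMATCH;
  }

  static void setup_hostkey(CURL *curl, struct app_ctx *ctx)
  {
    curl_easy_setopt(curl, CURLOPT_SSH_HOSTKEYFUNCTION, hostkey_cb);
    curl_easy_setopt(curl, CURLOPT_SSH_HOSTKEYDATA, ctx);
  }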
Folded header lines will now get passed through like before. The headers
API is adapted and will provide the content unfolded.
Added test 1274 and extended test 1940 to verify.
Reported-by: Petr Pisar
Fixes #8844
Closes #8899
Usage:
curl -x "socks5h://localhost/run/tor/socks" "https://example.com"
Updated runtests.pl to run a socksd server listening on unix socket
Added tests test1467 test1468
Added documentation for proxy command line option and socks proxy
options
Closes #8668
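The libcurl equivalent of the command line above (socket path as in the
example):

  #include <curl/curl.h>

  static void setup_tor_proxy(CURL *curl)
  {
    /* the "path" part of the proxy URL names the unix domain socket */
    curl_easy_setopt(curl, CURLOPT_PROXY, "socks5h://localhost/run/tor/socks");
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  }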
These two options were only ever used for the OpenSSL backend for
versions before 1.1.0. They were never used for other backends and they
are not used with recent OpenSSL versions. They were never used much by
applications.
The defines RANDOM_FILE and EGD_SOCKET can still be set at build-time
for ancient EOL OpenSSL versions.
Closes #8670
The API documentation for the MIME functions specifies that the parts
can be set twice, with the last call winning. While true, the user can
set the parts n times for n > 2, so reword to specify multiple API calls
instead.
Closes: #8860
Reviewed-by: Daniel Stenberg <daniel@haxx.se>