- test for a simplified C99 variadic check
- args to infof() in --disable-verbose are no longer disregarded but
must compile.
Closes #12167
Fixes #12083
Fixes #11880
Fixes #11891
- resolving is done for a connection, not for every transfer
- save create/dup/free of a cares channel for each transfer
- check values of setopt calls against a local channel if no
connection has been attached yet, when needed.
Closes#12198
Connection filter had a `get_select_socks()` method, inspired by the
various `getsocks` functions involved during the lifetime of a
transfer. These, depending on transfer state (CONNECT/DO/DONE/ etc.),
return sockets to monitor and flag if this shall be done for POLLIN
and/or POLLOUT.
Due to this design, sockets and flags could only be added, not
removed. This led to problems in filters like HTTP/2 where flow control
prohibits the sending of data until the peer increases the flow
window. The general transfer loop wants to write, adds POLLOUT, the
socket is writeable but no data can be written.
This leads to CPU busy loops. To prevent that, HTTP/2 did set the
`SEND_HOLD` flag of such a blocked transfer, so the transfer loop ceases
further attempts. This works if only one such filter is involved. If an
HTTP/2 transfer goes through an HTTP/2 proxy, two filters are
setting/clearing this flag and may step on each other's toes.
Connection filters `get_select_socks()` is replaced by
`adjust_pollset()`. They get passed a `struct easy_pollset` that keeps
up to `MAX_SOCKSPEREASYHANDLE` sockets and their `POLLIN|POLLOUT`
flags. This struct is initialized in `multi_getsock()` by calling the
various `getsocks()` implementations based on transfer state, as before.
After protocol handlers/transfer loop have set the sockets and flags
they want, the `easy_pollset` is *always* passed to the filters. Filters
"higher" in the chain are called first, starting at the first
not-yet-connection one. Each filter may add sockets and/or change
flags. When all flags are removed, the socket itself is removed from the
pollset.
Example:
* transfer wants to send, adds POLLOUT
* http/2 filter has a flow control block, removes POLLOUT and adds
POLLIN (it is waiting on a WINDOW_UPDATE from the server)
* TLS filter is connected and changes nothing
* h2-proxy filter also has a flow control block on its tunnel stream and
likewise removes POLLOUT and adds POLLIN.
* socket filter is connected and changes nothing
* The resulting pollset is then mixed together with all other transfers
and their pollsets, just as before.
Use of `SEND_HOLD` is no longer necessary in the filters.
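Below is a minimal standalone sketch of that idea, with illustrative
types and names rather than curl's internal `easy_pollset` API: a small
pollset tracks sockets with IN/OUT interest, a filter may flip the flags
for its socket, and a socket whose flags drop to zero falls out of the
set.
```
#include <stdio.h>

#define MAX_SOCKS 4
#define WANT_IN  0x01
#define WANT_OUT 0x02

struct pollset {
  int socks[MAX_SOCKS];           /* socket descriptors */
  unsigned char flags[MAX_SOCKS]; /* WANT_IN and/or WANT_OUT per socket */
  unsigned int num;
};

/* add/remove interest flags for a socket; drop the socket when no
   flags remain */
static void pollset_change(struct pollset *ps, int sock,
                           unsigned char add, unsigned char del)
{
  unsigned int i;
  for(i = 0; i < ps->num; i++) {
    if(ps->socks[i] == sock) {
      ps->flags[i] = (unsigned char)((ps->flags[i] | add) & ~del);
      if(!ps->flags[i]) { /* no interest left: remove the socket */
        ps->socks[i] = ps->socks[ps->num - 1];
        ps->flags[i] = ps->flags[ps->num - 1];
        ps->num--;
      }
      return;
    }
  }
  if(add && ps->num < MAX_SOCKS) { /* not present yet, add it */
    ps->socks[ps->num] = sock;
    ps->flags[ps->num] = add;
    ps->num++;
  }
}

/* an h2-like filter under flow control: stop polling for writeability,
   wait for the peer's WINDOW_UPDATE instead */
static void h2_adjust_pollset(struct pollset *ps, int sock, int blocked)
{
  if(blocked)
    pollset_change(ps, sock, WANT_IN, WANT_OUT);
}

int main(void)
{
  struct pollset ps = { {0}, {0}, 0 };
  pollset_change(&ps, 5, WANT_OUT, 0); /* transfer wants to send */
  h2_adjust_pollset(&ps, 5, 1);        /* filter is flow-control blocked */
  printf("sockets: %u, flags[0]: %u (1=IN, 2=OUT)\n", ps.num, ps.flags[0]);
  return 0;
}
```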
All filters are adapted for the changed method. The handling in
`multi.c` has been adjusted, but its state handling and the protocol
handlers' `getsocks` methods are untouched.
The most affected filters are http/2, ngtcp2, quiche and h2-proxy. TLS
filters needed to be adjusted for the connecting handshake read/write
handling.
No noticeable difference in performance was detected in local scorecard
runs.
Closes#11833
The goal of this patch is to avoid unnecessary feature detection work
when doing Windows builds with CMake. Do this by pre-filling well-known
detection results for Windows and specifically for mingw-w64 and MSVC
compilers. Also limit feature checks to platforms where the results are
actually used. Drop a few redundant ones. And some tidying up.
- pre-fill remaining detection values in Windows CMake builds.
Based on actual detection results observed in CI runs, on earlier
similar work done for libssh2, and on matching up values with
`lib/config-win32.h`.
This brings down CMake configuration time from 58 to 14 seconds on the
same local machine.
On AppVeyor CI this translates to:
- 128 seconds -> 50 seconds VS2022 MSVC with OpenSSL (per CMake job):
https://ci.appveyor.com/project/curlorg/curl/builds/48208419/job/4gw66ecrjpy7necb#L296
https://ci.appveyor.com/project/curlorg/curl/builds/48217440/job/8m4fwrr2fe249uo8#L186
- 62 seconds -> 16 seconds VS2017 MINGW (per CMake job):
https://ci.appveyor.com/project/curlorg/curl/builds/48208419/job/s1y8q5ivlcs7ub29?fullLog=true#L290
https://ci.appveyor.com/project/curlorg/curl/builds/48217440/job/pchpxyjsyc9kl13a?fullLog=true#L194
Each detection adds roughly 1-3 seconds of delay. Almost all of these
trigger a full compile-link cycle behind the scenes, which is slow even
today, both cross and native, with mingw-w64 and apparently MSVC too.
Enabling .map files or other custom build features slows it down
further. (Similar is expected for autotools configure.)
- stop detecting `idn2.h` if idn2 was deselected.
autotools does this.
- stop detecting `idn2.h` if idn2 was not found.
This deviates from autotools. Source code requires both header and
lib, so this is still correct, but faster.
- limit `ADDRESS_FAMILY` detection to Windows.
- normalize `HAVE_WIN32_WINNT` value to lowercase `0x0a12` format.
- pre-fill `HAVE_WIN32_WINNT`-dependent detection results.
Saving 4 (slow) feature-detections in most builds: `getaddrinfo`,
`freeaddrinfo`, `inet_ntop`, `inet_pton`
- fix pre-filled `HAVE_SYS_TIME_H`, `HAVE_SYS_PARAM_H`,
`HAVE_GETTIMEOFDAY` for mingw-w64.
Luckily this does not change build results, as `WIN32` took
priority over `HAVE_GETTIMEOFDAY` with the current source
code.
- limit `HAVE_CLOCK_GETTIME_MONOTONIC_RAW` and
`HAVE_CLOCK_GETTIME_MONOTONIC` detections to non-Windows.
We're not using these in the source code for Windows.
- reduce compiler warning noise in CMake internal logs:
- fix to include `winsock2.h` before `windows.h` (see the include-order
sketch after this list).
Apply it to autotools test snippets too.
- delete previous `-D_WINSOCKAPI_=` hack that aimed to fix the above.
- cleanup `CMake/CurlTests.c` to emit fewer warnings.
- delete redundant `HAVE_MACRO_SIGSETJMP` feature check.
It was the same check as `HAVE_SIGSETJMP`.
- delete 'experimental' marking from `CURL_USE_OPENSSL`.
- show CMake version via `CMakeLists.txt`.
Credit to the `zlib-ng` project for the idea:
61e181c8ae/CMakeLists.txt (L7)
- make `CMake/CurlTests.c` pass `checksrc`.
- `CMake/WindowsCache.cmake` tidy-ups.
- replace `WIN32` guard with `_WIN32` in `CMake/CurlTests.c`.
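For reference, the `winsock2.h` include-order rule mentioned above, as a
generic compile-only snippet (not the actual CMake test source):
`winsock2.h` has to come before `windows.h`, or `windows.h` pulls in the
legacy `winsock.h` and the two clash.
```
#ifdef _WIN32
/* winsock2.h must precede windows.h; otherwise windows.h includes the
   legacy winsock.h and redefinition warnings/errors follow */
#include <winsock2.h>
#include <windows.h>
#include <ws2tcpip.h>
#endif

int main(void)
{
  return 0; /* compile-only check, nothing to run */
}
```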
Closes#12044
Currently the verbose output does not include which algorithms are used
for the signature and key exchange when using OpenSSL. Including the
algorithms used enables better debugging when working on new algorithm
implementations. Knowing what algorithms are used has become more
important with the fast-growing research into quantum-safe algorithms.
This implementation includes a build-time check for the OpenSSL version
in order to use a new function that will be included in OpenSSL 3.2 and
was introduced in openssl/openssl@6866824
Based-on-patch-by: Martin Schmatz <mrt@zurich.ibm.com>
Closes#12030
Getting nghttp2's error message helps users understand what's going
on. For example, when the connection is brought down because a
forbidden header was used, as that header is then not displayed by curl
itself.
Example:
curl: (92) Invalid HTTP header field was received: frame type: 1,
stream: 1, name: [upgrade], value: [h2,h2c]
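A rough sketch of how such a message can be captured via nghttp2's
error callback (illustrative only, not curl's exact code; it assumes
the `error_callback2` interface of the nghttp2 API):
```
#include <stdio.h>
#include <string.h>
#include <nghttp2/nghttp2.h>

static char last_h2_msg[256];

/* nghttp2 calls this with its own descriptive text, for example
   "Invalid HTTP header field was received: ..." */
static int on_h2_error(nghttp2_session *session, int lib_error_code,
                       const char *msg, size_t len, void *userp)
{
  size_t n = (len < sizeof(last_h2_msg) - 1) ? len : sizeof(last_h2_msg) - 1;
  (void)session;
  (void)lib_error_code;
  (void)userp;
  memcpy(last_h2_msg, msg, n);
  last_h2_msg[n] = '\0';
  return 0;
}

int main(void)
{
  nghttp2_session_callbacks *cbs;
  if(nghttp2_session_callbacks_new(&cbs))
    return 1;
  nghttp2_session_callbacks_set_error_callback2(cbs, on_h2_error);
  /* ... create the session with these callbacks and run the transfer;
     on failure, copy last_h2_msg into the error message shown to
     the user ... */
  nghttp2_session_callbacks_del(cbs);
  return 0;
}
```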
Ref: #12172
Closes #12179
... and make the code require both symbol and declaration.
This is because on Android, the symbol is always present in the lib at
build time even when it is not actually available at run time.
Assisted-by: Viktor Szakats
Reported-by: 12932 on github
Fixes #12086
Closes #12158
Fixes a minor memory leak on LDAP connection reuse.
Doing the allocation already in *setup_connection() is wrong since that
connect struct might get discarded early when an existing connection is
reused instead.
Closes#12166
Remove the CURL_CA_FALLBACK logic. That build option was added to allow
primarily OpenSSL to use the default paths for loading the CA certs. For
GnuTLS it was instead made to load the "system certs", which is
different and not desirable.
The native CA store loading is now asked for with this option.
Follow-up to 7b55279d1d
Co-authored-by: Jay Satiro
Closes#12137
- fix HTTP header parsing to report incomplete
lines it buffers as consumed!
- re-implement the RTP parser for interleaved RTP
messages for robustness. It now keeps its
state on the connection
- RTSP protocol handler "readwrite" implementation
now tracks if the response is before/in/after
header parsing or "in" a body by calling
"Curl_http_readwrite_headers()" itself. This
allows it to know when non-RTP bytes are "junk"
or HEADER or BODY.
- tested with #12035 and various small receive
sizes where current master fails
Closes#12052
- fold the code to convert dynhds to the nghttp2 structs
into a dynhds internal method
- saves code duplication
- pacifies compiler analyzers
Closes#12097
To avoid the state machine starting over and redownloading all the
files *again*.
Reported-by: lkordos on github
Regression from 843b3baa3e (shipped in 8.1.0)
Bisect-by: Dan Fandrich
Fixes #11775
Closes #12156
- Remove DEBUGASSERT that an internal handle must not have user
private_data set before calling the user's debug callback.
This is a follow-up to 0dc40b2a. The user can distinguish their easy
handle from an internal easy handle by setting CURLOPT_PRIVATE on their
easy handle. I had wrongly assumed that meant the user couldn't then
set CURLOPT_PRIVATE on an internal handle as well.
Bug: https://github.com/curl/curl/pull/12060#issuecomment-1754594697
Reported-by: Daniel Stenberg
Closes https://github.com/curl/curl/pull/12104
- configure a 120s idle timeout on our side of the connection
- track the timestamp when actual socket IO happens
- check the IO timestamp against our *and* the peer's idle timeouts
in "is this connection alive" checks (see the sketch below)
Reported-by: calvin2021y on github
Fixes #12064
Closes #12077
populate_binsettings now returns a negative value on error, instead of a
huge positive value. Both places which call this function have been
updated to handle this change in its contract.
The way populate_binsettings had been used prior to this change, the
huge positive values -- due to signed-to-unsigned conversion of the
potentially negative result of nghttp2_pack_settings_payload, which
returns negative values on error -- were not possible. But only because
http2.c currently
always provides a large enough output buffer and provides H2 SETTINGS
IVs which pass the verification logic inside nghttp2. If the
verification logic were to change or if http2.c started passing in more
IVs without increasing the output buffer size, the overflow could become
reachable, and libcurl/curl might start leaking memory contents to
servers/proxies...
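The hazard in miniature, as a generic illustration rather than the
actual nghttp2 call: a negative error return, once stored in an
unsigned length, becomes an enormous positive value that later
copy/send code would happily honor.
```
#include <stdio.h>

/* stand-in for a packer that returns the payload length,
   or a negative value on error */
static long pack_payload(void)
{
  return -1; /* pretend the settings failed validation */
}

int main(void)
{
  unsigned long len = (unsigned long)pack_payload(); /* -1 turns huge */
  long slen = pack_payload();                        /* stays negative */

  printf("as an unsigned length: %lu\n", len);
  if(slen < 0)
    printf("packer failed; nothing gets copied or sent\n");
  return 0;
}
```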
Closes#12101
This define is set in wolfssl's options.h file when this function and
feature are present. This handles both builds with the feature explicitly
disabled and wolfSSL versions before 5.5.2 - which introduced this API
call.
Closes#12108
- Add CURL_DISABLE_GETOPTIONS to curl_config.h.cmake.
Prior to this change the option had no effect because it was missing
from that file.
Closes https://github.com/curl/curl/pull/12091
Prior to this change the state machine attempted to change the remote
resolve to a local resolve if the hostname was longer than 255
characters. Unfortunately that did not work as intended and caused a
security issue.
Bug: https://curl.se/docs/CVE-2023-38545.html
- add `mq->recvbuf` to provide buffering of incomplete
ACK responses
- continue ACK reading until sufficient bytes available
- fixes test failures on low network receives
Closes#12071
Syncing this up with CMake.
Source code uses the built-in `OPENSSL_IS_AWSLC` and
`OPENSSL_IS_BORINGSSL` macros to detect BoringSSL and AWS-LC. No help is
necessary from the build tools.
The one use of `HAVE_BORINGSSL` in the source turned out to be no longer
necessary for warning-free BoringSSL + Schannel builds. Ref: #1610 #2634
autotools detects this anyway for display purposes.
CMake detects this to decide whether to use the BoringSSL-specific
crypto lib with ngtcp2. It detects AWS-LC, but doesn't use the detection
result just yet (planned in #12066).
Ref: #11964
Reviewed-by: Daniel Stenberg
Reviewed-by: Jay Satiro
Closes#12065
add 2 env variables for non-UDP sockets:
1. CURL_DBG_SOCK_RBLOCK: percentage of receive calls that randomly
should return EAGAIN
2. CURL_DBG_SOCK_RMAX: maximum number of bytes read from the socket
Closes#12035
- answer HTTP/2 streams refused via a GOAWAY from the server
with CURLE_RECV_ERROR in order to trigger a retry
on another connection
Reported-by: black-desk on github
Ref #11859
Closes #12054
- Warn that the user's debug callback may be called with the handle
parameter set to an internal handle.
Without this warning the user may assume that the only handles their
debug callback receives are the easy handles on which they set
CURLOPT_DEBUGFUNCTION.
This is a follow-up to f8cee8cc which changed DoH handles to inherit
the debug callback function set in the user's easy handle. As a result
those handles are now passed to the user's debug callback function.
Closes https://github.com/curl/curl/pull/12034
... when it does a state transition but there is no particular socket or
timer activity. This was made apparent when commit b5bb84c removed a
superfluous timer expiry.
Reported-by: Dan Fandrich.
Fixes #12033
Closes #12056
While the struct is still public in OpenSSL, there is a (somewhat
inconvenient) accessor. Use it to remain compatible if it becomes opaque
in the future.
Closes#12038
- Return CURLE_URL_MALFORMAT if IDN hostname cannot be converted from
UTF-8 to UTF-16.
Prior to this change a failed conversion erroneously returned CURLE_OK,
which meant the 'decoded' pointer (what would normally point to the
punycode) would not be written to, would remain NULL and would be
dereferenced, causing an access violation.
Closes https://github.com/curl/curl/pull/11983
Since the tool itself now uses the base64 code the curlx way, that code
needs to build also when only the tool needs it. Starting now, the tool
build defines BULDING_CURL to allow lib-side code to use it.
Follow-up to 2e160c9c65
Closes #12010
By using unique static function/variable names in source files
implementing these interfaces.
- OpenLDAP combined with any SSH backend.
- MultiSSL with mbedTLS, OpenSSL, wolfSSL, SecureTransport.
Closes#12027
Found the root cause of the startup crash in unity builds with Unicode
and TrackMemory enabled at the same time.
We must make sure that the `memdebug.h` header doesn't apply to
`lib/curl_multibyte.c` (as even noted in a comment there.) In unity
builds all headers apply to all sources, including `curl_multibyte.c`.
This probably resulted in an infinite loop on startup.
Exclude this source from unity compilation with TrackMemory enabled,
in both libcurl and curl tool. Enable unity mode for a debug Unicode
CI job to keep it tested. Also delete the earlier workaround that
fully disabled unity for affected builds.
Follow-up to d82b080f63 #12005
Follow-up to 3f8fc25720 #11095
Closes #11928
- refs #11982 where it was noted that paused transfers may
close successfully without delivering the complete data
- made sample poc into tests/http/client/h2-pausing.c and
added test_02_27 to reproduce
Closes #11989
Fixes #11982
Reported-by: Harry Sintonen
The default wolfSSL_CTX_load_verify_locations() function is quite picky
with the certificates it loads and will for example return error if just
one of the certs has expired.
With the *_ex() function and its WOLFSSL_LOAD_FLAG_IGNORE_ERR flag, it
behaves more similar to what OpenSSL does by default.
Even the set of default certs on my Debian unstable has several expired
ones.
Assisted-by: Juliusz Sosinowicz
Assisted-by: Michael Osipov
Closes#11987
- check for arc4random. To make rand.c use it accordingly.
- check for fcntl
- fix fseek detection
- add SIZEOF_CURL_SOCKET_T
- fix USE_UNIX_SOCKETS
- define HAVE_SNPRINTF to 1
- check for fnmatch
- check for sched_yield
- remove HAVE_GETPPID duplicate from curl_config.h
- add HAVE_SENDMSG
Ref: #11964
Co-authored-by: Viktor Szakats
Closes#11973
With new option `CURL_DISABLE_SRP=ON` to force-disable it.
To match existing option and detection logic in autotools.
Also:
- fix detecting GnuTLS.
We assume `nettle` as a GnuTLS dependency.
- add CMake GnuTLS CI job.
- bump AppVeyor CMake OpenSSL MSVC job to OpenSSL 1.1.1 (from 1.0.2)
TLS-SRP fails to detect with 1.0.2 due to an OpenSSL header bug.
- fix compiler warning when building with GnuTLS and disabled TLS-SRP.
- fix comment typos, whitespace.
Ref: #11964
Closes #11967
- move definitions from content_encoding.h to sendf.h
- move create/cleanup/add code into sendf.c
- installed content_encoding writers will always be called
on Curl_client_write(CLIENTWRITE_BODY)
- Curl_client_cleanup() frees writers and tempbuffers from
paused transfers, regardless of protocol
Closes#11908
Curl_timediff rounds down to the millisecond, so curl_multi_perform can
be called too early, then we get a timeout of 0 and call it again.
The code already handled the case of timeouts which expired less than
1ms in the future. By rounding up, we make sure we will never ask the
platform to wake up too early.
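The arithmetic at play, as a standalone sketch (not curl's actual
helper): a timeout expiring in, say, 900 microseconds truncates to 0 ms
and triggers a pointless immediate call, while rounding up waits 1 ms
and lands just past expiry.
```
#include <stdio.h>

/* round a microsecond delta up to whole milliseconds */
static long us_to_ms_ceil(long us)
{
  return (us + 999) / 1000;
}

int main(void)
{
  printf("900us  -> %ld ms (truncation would give 0)\n", us_to_ms_ceil(900));
  printf("1500us -> %ld ms\n", us_to_ms_ceil(1500));
  return 0;
}
```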
Closes#11938
CID 1024653: Integer handling issues (SIGN_EXTENSION)
Suspicious implicit sign extension: "src[i]" with type "unsigned char
const" (8 bits, unsigned) is promoted in "src[i] << (1 - i % 2 << 3)" to
type "int" (32 bits, signed), then sign-extended to type "unsigned long"
(64 bits, unsigned). If "src[i] << (1 - i % 2 << 3)" is greater than
0x7FFFFFFF, the upper bits of the result will all be 1.
111 words[i/2] |= (src[i] << ((1 - (i % 2)) << 3));
The value will not be greater than 0x7FFFFFFF so this still cannot
happen.
Also, switch to ints here instead of longs. The values stored are 16 bit
so at least no need to use 64 bit variables. Also, longs are 32 bit on
some platforms so this logic still needs to work with 32 bits.
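What the analyzer objects to, reduced to a standalone example (not the
exact curl code): the `unsigned char` operand promotes to a signed `int`
before the shift, so an explicit unsigned cast both documents the intent
and silences the SIGN_EXTENSION finding.
```
#include <stdio.h>

int main(void)
{
  const unsigned char src[2] = { 0x12, 0x34 };
  unsigned int word = 0;
  int i;

  for(i = 0; i < 2; i++)
    /* cast before shifting so the whole expression stays unsigned */
    word |= (unsigned int)src[i] << ((1 - (i % 2)) << 3);

  printf("word: 0x%04x\n", word); /* prints 0x1234 */
  return 0;
}
```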
Closes#11960
... so that it gets called again immediately and can continue trying
addresses to connect to. Otherwise it might unnecessarily wait for a
while there.
Fixes#11920
Reported-by: Loïc Yhuel
Closes#11935
- `HAVE_MEMRCHR` for `memrchr`.
- `HAVE_GETIFADDRS` for `getifaddrs`.
This was present in `lib/curl_config.h.cmake` but missed the detection
logic.
To match existing autotools feature checks.
Closes#11954
Delete checks and guards for standard C89 headers and assume these are
available: `stdio.h`, `string.h`, `time.h`, `setjmp.h`, `stdlib.h`,
`stddef.h`, `signal.h`.
Some of these we already used unconditionally, some others we only used
for feature checks.
Follow-up to 9c7165e96a #11918 (for `stdio.h` in CMake)
Closes#11940
- If SSL shutdown is not finished then make an additional call to
SSL_read to gather additional tracing.
- Fix http2 and h2-proxy filters to forward do_close() calls to the next
filter.
For example h2 and SSL shutdown before and after this change:
Before:
Curl_conn_close -> cf_hc_close -> Curl_conn_cf_discard_chain ->
ssl_cf_destroy
After:
Curl_conn_close -> cf_hc_close -> cf_h2_close -> cf_setup_close ->
ssl_cf_close
Note that currently the tracing does not show output on the connection
closure handle. Refer to discussion in #11878.
Ref: https://github.com/curl/curl/discussions/11878
Closes https://github.com/curl/curl/pull/11858
Since Curl_timediff rounds down to the millisecond, timeouts which
expire in less than 1ms are considered as outdated and removed from the
list. We can use Curl_timediff_us instead; big timeouts could saturate,
but this is not an issue.
Closes#11937
- always define `CURL_STATICLIB` when building libcurl for Windows.
This disables `__declspec(dllexport)` for exported libcurl symbols.
In normal mode (hide symbols) these exported symbols are specified
via `libcurl.def`. When not hiding symbols, all symbols are exported
by default.
Regression from 1199308dbc
Fixes #11844
- fix to omit `libcurl.def` when not hiding private symbols.
Regression from 2ebc74c36a
- fix `ENABLED_DEBUG=ON` + shared curl tool Windows builds by also
omitting `libcurl.def` in this case, and exporting all symbols
instead. This ensures that a shared curl tool can access all debug
functions which are not normally exported from libcurl DLL.
- delete `INTERFACE_COMPILE_DEFINITIONS "CURL_STATICLIB"` for "objects"
target.
Follow-up to 2ebc74c36a
- delete duplicate `BUILDING_LIBCURL` definitions.
- fix `HIDES_CURL_PRIVATE_SYMBOLS` to not overwrite earlier build settings.
Follow-up to 1199308dbc
Closes #11914
- use shared code for setting up the CONNECT request
when tunneling, used in HTTP/1.x and HTTP/2 proxying
- eliminate use of Curl_buffer_send() and other manipulations
of `data->req` or `data->state.ulbuf`
Closes#11808
fseek uses a long offset, which does not match curl_off_t. This leads
to undefined behavior when calling the callback and caused failures on
32-bit Arm.
Use a wrapper to solve this and use fseeko which uses off_t instead of
long.
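A sketch of such a wrapper with illustrative names, assuming a POSIX
`fseeko()` with a 64-bit `off_t`: hand the wide offset the callback
receives to `fseeko()` instead of squeezing it into `fseek()`'s `long`.
```
#define _POSIX_C_SOURCE 200809L /* for fseeko() */
#define _FILE_OFFSET_BITS 64    /* make off_t 64-bit on 32-bit platforms */
#include <stdio.h>
#include <sys/types.h>

/* seek shim: a 64-bit offset must not be truncated into fseek()'s long,
   which is only 32 bits on, for example, 32-bit Arm */
static int seek_cb(void *userp, long long offset, int origin)
{
  FILE *f = (FILE *)userp;
  return fseeko(f, (off_t)offset, origin) ? 1 : 0; /* 1 means failure */
}

int main(void)
{
  FILE *f = tmpfile();
  if(!f)
    return 1;
  fputs("hello", f);
  printf("seek result: %d\n", seek_cb(f, 1, SEEK_SET));
  fclose(f);
  return 0;
}
```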
Thanks to the nice people at Libera IRC #musl for helping finding this
out.
Fixes #11882
Fixes #11900
Closes #11918
A debug-only function that is basically never used. Removed to ease the
use of the singleuse script to detect non-static functions not used
outside the file where it is defined.
Closes#11931
- Fix netrc info message to use the generic ".netrc" filename if the
user did not specify a netrc location.
- Update --netrc doc to add that recent versions of curl on Windows
prefer .netrc over _netrc.
Before:
* Couldn't find host google.com in the (nil) file; using defaults
After:
* Couldn't find host google.com in the .netrc file; using defaults
Closes https://github.com/curl/curl/pull/11904
Also fix to export all symbols in Windows debug builds, making
`-debug-dyn` builds work with `-DCURL_STATICLIB` set.
Ref: https://github.com/curl/curl/pull/11914 (same for CMake)
Closes#11924
Previously it would only stop them from getting started if the size was
known to be too big up front.
Update the libcurl and curl docs accordingly.
Fixes#11810
Reported-by: Elliot Killick
Assisted-by: Jay Satiro
Closes#11820
- If libssh2_userauth_publickey_fromfile_ex returns -1 then show error
message "SSH public key authentication failed: Reason unknown (-1)".
When libssh2_userauth_publickey_fromfile_ex returns -1 it does so as a
generic error and therefore doesn't set an error message. AFAICT that is
not documented behavior.
Prior to this change libcurl retrieved the last set error message which
would be from a previous function failing. That resulted in misleading
auth failed error messages in verbose mode.
Bug: https://github.com/curl/curl/issues/11837#issue-1891827355
Reported-by: consulion@users.noreply.github.com
Closes https://github.com/curl/curl/pull/11881
- use CLIENTWRITE_BODY *only* when data is actually body data
- add CLIENTWRITE_INFO for meta data that is *not* a HEADER
- debug assertions that BODY/INFO/HEADER is not used mixed
- move `data->set.include_header` check into Curl_client_write
so protocol handlers no longer have to care
- add a special case in FTP for `data->set.include_header` for historic,
backward-compatible reasons
- move unpausing of client writes from easy.c to sendf.c, so that
code is in one place and can forward flags correctly
Closes#11885
Using the system's provided arpa/tftp.h and optimizing, GCC 12 detects
and reports a stringop-overread warning:
tftpd.c: In function ‘write_behind.isra’:
tftpd.c:485:12: warning: ‘write’ reading between 1 and 2147483647 bytes from a region of size 0 [-Wstringop-overread]
485 | return write(test->ofile, writebuf, count);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from tftpd.c:71:
/usr/include/arpa/tftp.h:58:30: note: source object ‘tu_data’ of size 0
58 | char tu_data[0]; /* data or error string */
| ^~~~~~~
This occurs because writebuf points to this field and the latter
cannot be considered as being of dynamic length because it is not
the last field in the structure. Thus it is bound to its declared
size.
This commit always uses curl's own version of tftp.h where the
target field is last in its structure, effectively avoiding the
warning.
As HAVE_ARPA_TFTP_H is not used anymore, cmake/configure checks for
arpa/tftp.h are removed.
Closes#11897
Previously a build that disabled NTLM and aws-sigv4 would fail to build
since the hmac was disabled, but it is also needed for digest auth.
Follow-up to e92edfbef6
Fixes #11890
Reported-by: Aleksander Mazur
Closes#11896
When bearer auth was disabled, the if/else logic went wrong and caused
problems.
Follow-up to e92edfbef6
Fixes #11892
Reported-by: Aleksander Mazur
Closes#11895
Not the counter that accumulates all headers over all redirects.
Follow-up to 3ee79c1674
Do a second check for 20 times the limit for the accumulated size for
all headers.
Fixes#11871
Reported-by: Joshix-1 on github
Closes#11872
It was that small on purpose, but this change now adds the null byte to
avoid the error.
Follow-up to 3aa3cc9b05
Reported-by: Dan Fandrich
Ref: #11838
Closes #11870
When creating new transfers for doing DoH, they now inherit the debug
settings from the initiating transfer, so that the application can
redirect and handle the verbose output correctly even for the DoH
transfers.
Reported-by: calvin2021y on github
Fixes #11864
Closes #11869
When comparing with an empty part, the non-empty one is always
considered greater-than. Previously, the two would be considered equal
which would randomly place empty parts amongst non-empty ones. This
showed as a test 439 failure on Solaris as it uses a different
implementation of qsort() that compares parts differently.
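The ordering rule, sketched as a qsort comparator (illustrative, not
the project's actual sorting code): empty parts always sort before
non-empty ones, so the outcome no longer depends on how a particular
qsort() implementation orders elements it considers equal.
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* qsort comparator: an empty part sorts before any non-empty part */
static int cmp_part(const void *a, const void *b)
{
  const char *pa = *(const char *const *)a;
  const char *pb = *(const char *const *)b;
  if(!*pa && !*pb)
    return 0;
  if(!*pa)
    return -1;
  if(!*pb)
    return 1;
  return strcmp(pa, pb);
}

int main(void)
{
  const char *parts[] = { "ssl", "", "zlib" };
  qsort(parts, sizeof(parts)/sizeof(parts[0]), sizeof(parts[0]), cmp_part);
  printf("'%s' '%s' '%s'\n", parts[0], parts[1], parts[2]);
  return 0;
}
```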
Fixes #11855
Closes #11868
Generate alphanumerical random strings.
Prior to this change curl used to create random hex strings. This was
mostly okay, but alphanumerical random strings are better: they pack
more entropy into the same space.
The MIME multipart boundary used to be a mere 64 bits of randomness,
being 16 hex chars of 4 bits each. With these changes the boundary is
22 alphanumerical chars, or a little over 130 bits of randomness
(22 x log2(62) is roughly 131).
Closes#11838
Plus: reduce the hash table size from 256 to 63. It seems unlikely to
make much of a speed difference for most use cases but saves 1.5KB of
data per instance.
Closes#11862
- Use the ALLCAPS version of the macro so that it is clear a macro is
being called that evaluates the variable multiple times.
- Also capitalize macro isurlpuntcs => ISURLPUNTCS since it evaluates
a variable multiple times.
This is a follow-up to 291d225a which changed Curl_isunreserved into an
alias macro for ISUNRESERVED. The problem is the former is not easily
identified as a macro by the caller, which could lead to a bug.
For example, ISUNRESERVED(*foo++) is easily identifiable as wrong but
Curl_isunreserved(*foo++) is not even though they both are the same.
Closes https://github.com/curl/curl/pull/11846
- Handle user headers in format 'name:' and 'name;' with no value.
The former is used when the user wants to remove an internal libcurl
header and the latter is used when the user actually wants to send a
no-value header in the format 'name:' (note the semi-colon is converted
by libcurl to a colon).
Prior to this change the AWS header import code did not special case
either of those and the generated AWS SignedHeaders would be incorrect.
Reported-by: apparentorder@users.noreply.github.com
Ref: https://curl.se/docs/manpage.html#-H
Fixes https://github.com/curl/curl/issues/11664
Closes https://github.com/curl/curl/pull/11668
- Use CERT_CONTEXT's pbCertEncoded to determine chain order.
CERT_CONTEXT from SECPKG_ATTR_REMOTE_CERT_CONTEXT contains
end-entity/server certificate in pbCertEncoded. We can use this pointer
to determine the order of certificates when enumerating hCertStore using
CertEnumCertificatesInStore.
This change is to help ensure that the ordering of the certificate chain
requested by the user via CURLINFO_CERTINFO has the same ordering on all
versions of Windows.
Prior to this change Schannel certificate order was reversed in 8986df80
but that was later reverted in f540a39b when it was discovered that
Windows 11 22H2 does the reversal on its own.
Ref: https://github.com/curl/curl/issues/9706
Closes https://github.com/curl/curl/pull/11632
In https://www.rfc-editor.org/rfc/rfc2831#section-2.1.2
digest-uri-value should be serv-type "/" host , where host is:
The DNS host name or IP address for the service requested. The
DNS host name must be the fully-qualified canonical name of the
host. The DNS host name is the preferred form; see notes on server
processing of the digest-uri.
Realm may not be the host, so we must specify the host explicitly.
Note this change only affects the non-SSPI digest code. The digest code
used by SSPI builds already uses the hostname to generate the spn.
Ref: https://github.com/curl/curl/issues/11369
Closes https://github.com/curl/curl/pull/11395
Percent encoding needs to be done using uppercase hex digits, and most
non-alphanumerical characters must be percent-encoded.
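A sketch of the rule, not the project's exact routine: leave unreserved
characters alone and emit everything else as '%' followed by two
uppercase hex digits.
```
#include <ctype.h>
#include <stdio.h>

/* encode 'in' into 'out' using uppercase percent-escapes for everything
   outside the RFC 3986 unreserved set */
static void percent_encode(const char *in, char *out, size_t outlen)
{
  static const char hex[] = "0123456789ABCDEF";
  size_t o = 0;
  for(; *in && (o + 4) < outlen; in++) {
    unsigned char c = (unsigned char)*in;
    if(isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~')
      out[o++] = (char)c;
    else {
      out[o++] = '%';
      out[o++] = hex[c >> 4];
      out[o++] = hex[c & 0x0f];
    }
  }
  out[o] = '\0';
}

int main(void)
{
  char buf[64];
  percent_encode("a b/c", buf, sizeof(buf));
  printf("%s\n", buf); /* prints a%20b%2Fc */
  return 0;
}
```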
Fixes#11794
Reported-by: John Walker
Closes#11806
- requests >64K are sent in parts to the filter
- fix parsing of the request to assemble it correctly
from several sends
- open a QUIC stream only when the complete request has
been collected
Closes#11815
- we delay loading the x509 store to shorten the handshake time.
However an application callback installed via CURLOPT_SSL_CTX_FUNCTION
may need to have the store loaded and try to manipulate it.
- load the x509 store before invoking the app callback
Fixes#11800
Reported-by: guoxinvmware on github
Closes #11805
- set CURL_CI for pytest runs in CI environments
- exclude timing sensitive tests from CI runs
- for failed results, list only the log and stat of
the failed transfer
- fix typo in http.c comment
Closes#11812
ngtcp2 v0.19.0 made size of `ecn` member of `ngtcp2_pkt_info`
an `uint8_t` (was: `uint32_t`). Adjust our local cast accordingly.
Fixes:
```
./curl/lib/vquic/curl_ngtcp2.c:1912:12: warning: implicit conversion loses integer precision: 'uint32_t' (aka 'unsigned int') to 'uint8_t' (aka 'unsigned char') [-Wimplicit-int-conversion]
pi.ecn = (uint32_t)ecn;
~ ^~~~~~~~~~~~~
```
Also bump ngtcp2, nghttp3 and nghttp2 to their latest versions in our
docs and CI.
Ref: 80447281bb
Ref: https://github.com/ngtcp2/ngtcp2/pull/877
Closes #11798
- refs #11342 where errors with git https interactions
were observed
- problem was caused by 1st sends of size larger than 64KB
which resulted in later retries of 64KB only
- limit sending of 1st block to 64KB
- adjust h2/h3 filters to cope with parsing the HTTP/1.1
formatted request in chunks
- introducing Curl_nwrite() as companion to Curl_write()
for the many cases where the sockindex is already known
Fixes#11342 (again)
Closes#11803
Previously this cleared the receiving bit only but in some cases it is
also still sending (like a request-body) when disconnected and neither
direction can continue then.
Fixes#11769
Reported-by: Oleg Jukovec
Closes#11795
- added test cases for various code paths
- fixed handling of a blocked write when the stream had
been closed in between attempts
- re-enabled DEBUGASSERT on send with smaller data size
- in debug builds, environment variables can be set to simulate a slow
network when sending data. cf-socket.c and vquic.c support
* CURL_DBG_SOCK_WBLOCK: percentage of send() calls that should be
answered with an EAGAIN. TCP/UNIX sockets.
This is chosen randomly.
* CURL_DBG_SOCK_WPARTIAL: percentage of data that shall be written
to the network. TCP/UNIX sockets.
Example: 80 means a send with 1000 bytes would only send 800
This is applied to every send.
* CURL_DBG_QUIC_WBLOCK: percentage of send() calls that should be
answered with EAGAIN. QUIC only.
This is chosen randomly.
Closes#11756
`Curl_hyper_stream` needs to distinguish between two kinds of
`HYPER_TASK_EMPTY` tasks: (a) the `foreach` tasks it creates itself, and
(b) background tasks that hyper produces. It does this by recording the
address of any `foreach` task in `hyptransfer->endtask` before pushing
it into the executor, and then comparing that against the address of
tasks later polled out of the executor.
This works right now, but there is no guarantee from hyper that the
addresses are stable. `hyper_executor_push` says "The executor takes
ownership of the task, which should not be accessed again unless
returned back to the user with `hyper_executor_poll`". That wording is a
bit ambiguous but with my Rust programmer's hat on I read it as meaning
the task returned with `hyper_executor_poll` may be conceptually the
same as a task that was pushed, but that there are no other guarantees
and comparing addresses is a bad idea.
This commit instead uses `hyper_task_set_userdata` to mark the `foreach`
task with a `USERDATA_RESP_BODY` value which can then be checked for,
removing the need for `hyptransfer->endtask`. This makes the code look
more like the hyper C API examples, which use userdata for every task
and never look at task addresses.
Closes#11779
`Curl_pgrsSetUploadCounter` should be passed a total count, not an
increment.
This changes the failing diff for test 579 with hyper from this:
```
Progress callback called with UL 0 out of 0[LF]
-Progress callback called with UL 8 out of 0[LF]
-Progress callback called with UL 16 out of 0[LF]
-Progress callback called with UL 26 out of 0[LF]
-Progress callback called with UL 61 out of 0[LF]
-Progress callback called with UL 66 out of 0[LF]
+Progress callback called with UL 29 out of 0[LF]
```
to this:
```
Progress callback called with UL 0 out of 0[LF]
-Progress callback called with UL 8 out of 0[LF]
-Progress callback called with UL 16 out of 0[LF]
-Progress callback called with UL 26 out of 0[LF]
-Progress callback called with UL 61 out of 0[LF]
-Progress callback called with UL 66 out of 0[LF]
+Progress callback called with UL 40 out of 0[LF]
```
Presumably a step in the right direction.
Closes#11780
Allow overriding the default TLS backend via a CMake setting.
E.g.:
`cmake [...] -DCURL_DEFAULT_SSL_BACKEND=mbedtls`
Accepted values: bearssl, gnutls, mbedtls, openssl, rustls,
schannel, secure-transport, wolfssl
The passed string is baked into the curl/libcurl binaries.
The value is case-insensitive.
We added a similar option to autotools in 2017 via
c7170e20d0.
TODO: Convert to lowercase to improve reproducibility.
Closes#11774
- delete completed TODO from `./CMakeLists.txt`.
- convert a C++ comment to C89 in `./CMake/CurlTests.c`.
- delete duplicate EOLs from EOF.
- add missing EOL at EOF.
- delete whitespace at EOL (except from expected test results).
- convert tabs to spaces.
- convert CRLF EOLs to LF in GHA yaml.
- text casing fixes in `./CMakeLists.txt`.
- fix a codespell typo in `packages/OS400/initscript.sh`.
Closes#11772
- During the check to differentiate between a port and IPv6 address
without brackets, write the binary IPv6 address to an in6_addr.
Prior to this change the binary IPv6 address was erroneously written to
a sockaddr_in6 'sa6' when it should have been written to its in6_addr
member 'sin6_addr'. There's no fallout because no members of 'sa6' are
accessed before it is later overwritten.
Closes https://github.com/curl/curl/pull/11747
When curl wants to connect to a host, it always has a TIMEOUT. The
maximum time it is allowed to spend until a connect is confirmed.
curl will try to connect to each of the IP addresses returned for the
host. Two loops, one for each IP family.
During the connect loop, while curl has more than one IP address left to
try within a single address family, curl has traditionally allowed (time
left/2) for *this* connect attempt. This, to not get stuck on the
initial addresses in case the timeout but still allow later addresses to
get attempted.
This has the downside that when users set a very short timeout and the
host has a large number of IP addresses, the effective result might be
that every attempt gets a little too short time.
This change stops doing the divide-by-two if the total time left is
below a threshold. This threshold is 600 milliseconds.
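Roughly the allotment logic described above, as a standalone sketch
rather than the actual function: halve the remaining time while more
addresses in the same family are queued, but stop halving once the
remainder drops below the 600 millisecond floor.
```
#include <stdio.h>

#define HALVE_FLOOR_MS 600L /* below this, one attempt gets all time left */

static long attempt_timeout_ms(long total_left_ms, int more_addresses_left)
{
  if(more_addresses_left && total_left_ms >= HALVE_FLOOR_MS)
    return total_left_ms / 2;
  return total_left_ms;
}

int main(void)
{
  printf("5000 ms left, more to try: %ld ms\n", attempt_timeout_ms(5000, 1));
  printf(" 400 ms left, more to try: %ld ms\n", attempt_timeout_ms(400, 1));
  return 0;
}
```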
Closes#11693
When UDP packets get lost this makes for slightly faster retries. This
lower timeout is used by c-ares itself by default starting next
release.
Closes#11753
Store numerical IPv6 addresses in the alt-svc file with the brackets
present.
Verify with test 437 and 438
Fixes#11737
Reported-by: oliverpool on github
Closes#11743
Some of these changes come from comparing `Curl_http` and
`start_CONNECT`, which are similar, and adding things to them that are
present in one and missing in another.
The most important changes:
- In `start_CONNECT`, add a missing `hyper_clientconn_free` call on the
happy path.
- In `start_CONNECT`, add a missing `hyper_request_free` on the error
path.
- In `bodysend`, add a missing `hyper_body_free` on an early-exit path.
- In `bodysend`, remove an unnecessary `hyper_body_free` on a different
error path that would cause a double-free.
https://docs.rs/hyper/latest/hyper/ffi/fn.hyper_request_set_body.html
says of `hyper_request_set_body`: "This takes ownership of the
hyper_body *, you must not use it or free it after setting it on the
request." This is true even if `hyper_request_set_body` returns an
error; I confirmed this by looking at the hyper source code.
Other changes are minor but make things slightly nicer.
Closes#11745
We've seen errors left in the OpenSSL error queue (specifically,
"shutdown while in init"); adding some logging revealed that the
source was this file.
Since we call SSL_read and SSL_shutdown here, but don't check the return
code for an error, we should clear the OpenSSL error queue in case one
was raised.
This didn't affect curl because we call ERR_clear_error before every
write operation (a0dd9df9ab), but when
libcurl is used in a process with other OpenSSL users, they may detect
an OpenSSL error pushed by libcurl's SSL_shutdown as if it was their
own.
Co-authored-by: Satana de Sant'Ana <satana@skylittlesystem.org>
Closes#11736
There is a `hyper_clientconn_free` call on the happy path, but not one
on the error path. This commit adds one.
Fixes the second memory leak reported by Valgrind in #10803.
Fixes #10803
Closes #11729
A request created with `hyper_request_new` must be consumed by either
`hyper_clientconn_send` or `hyper_request_free`.
This is not terrifically clear from the hyper docs --
`hyper_request_free` is documented only with "Free an HTTP request if
not going to send it on a client" -- but a perusal of the hyper code
confirms it.
This commit adds a `hyper_request_free` to the `error:` path in
`Curl_http` so that the request is consumed when an error occurs after
the request is created but before it is sent.
Fixes the first memory leak reported by Valgrind in #10803.
Closes#11729
In this situation, only part of the data has been sent before aborting
so the connection is no longer usable.
Assisted-by: Jay Satiro
Fixes #11678
Closes #11679
When the legacy CURLOPT_HTTPPOST option is used, it gets converted into
the modern mimepost struct at first use. This data is (now) kept for the
entire transfer and not only per single HTTP request. This re-enables
rewind at the beginning of the second request instead of at the end of
the first, as brought by 1b39731.
The request struct is per-request data only.
Extend test 650 to verify.
Fixes#11680
Reported-by: yushicheng7788 on github
Closes#11682
- bring bearssl handshake times down from +200ms to be in line with other TLS backends
- vtls: improve generic get_select_socks() implementation
- tests: provide Apache with a suitable ssl session cache
Closes#11675
In parallel with ngtcp2, quiche also offers the `quiche_conn_on_timeout`
interface for the application to invoke upon timer
expiration. Therefore, invoking the `on_timeout` function of the
Connection is crucial to ensure seamless functionality of quiche with
timeout events.
Closes#11654
In order to get Negotiate (SPNEGO) authentication to work in HTTP you
used to be required to provide a (fake) user name (this concerned both
curl and the lib) because the code wrongly only considered
authentication if there was a user name provided, as in:
curl -u : --negotiate https://example.com/
This commit leverages the `struct auth` want member to figure out if the
user enabled CURLAUTH_NEGOTIATE, effectively removing the requirement of
setting a user name both in curl and the lib.
Signed-off-by: Marin Hannache <git@mareo.fr>
Reported-by: Enrico Scholz
Fixes https://sourceforge.net/p/curl/bugs/440/
Fixes #1161
Closes #9047
2ebc74c36a #11546 introduced sharing
libcurl objects between the shared and static targets.
The above was automatically enabled for Windows builds, with an option
to disable it with `SHARE_LIB_OBJECT=OFF`.
This patch extends this feature to all platforms as a manual option.
You can enable it by setting `SHARE_LIB_OBJECT=ON`. Then shared objects
are built in PIC mode, meaning the static lib will also have PIC code.
[EXPERIMENTAL]
Closes#11627
OpenSSL 1.1.1 defines this macro, but no earlier version, or any of the
popular forks (yet). Use the macro itself to detect its presence,
replacing the hard-wired fork-specific conditions.
This way the feature will enable automatically when forks implement it,
while also shorter and possibly requiring less future maintenance.
Follow-up to 94241a9e78 #6721
Reviewed-by: Jay Satiro
Closes#11617
Make sure that context initialization during hash setup works to avoid
going forward with the risk of a null pointer dereference.
Reported-by: Philippe Antoine on HackerOne
Assisted-by: Jay Satiro
Assisted-by: Daniel Stenberg
Closes#11614
LibreSSL 2.7.0 (2018-03-21) introduced automatic initialization,
`OPENSSL_init_ssl()` function and deprecated the old, manual init
method, as seen in OpenSSL 1.1.0. Switch to the modern method when
available.
Ref: https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.7.0-relnotes.txt
Reviewed-by: Daniel Stenberg
Closes#11611
We remove support for building curl with gskit.
- This is a niche TLS library, only running on some IBM systems
- no regular curl contributors use this backend
- no CI builds use or verify this backend
- gskit, or the curl adaptation of it, lacks many modern TLS features,
making it an inferior solution
- build breakages in this code take weeks or more to get detected
- fixing gskit code is mostly done "flying blind"
This removal has been advertised in DEPRECATED since Jan 2, 2023 and it
has been mentioned on the curl-library mailing list.
It could be brought back, this is not a ban. Given proper effort and
will, gskit support is welcome back into the curl TLS backend family.
Closes#11460
This is a bad header fold, but since the popular browsers accept this
violation, so does curl now. Unless built with hyper.
Add test 1473 to verify and adjust test 2306.
Reported-by: junsik on github
Fixes #11605
Closes #11607