Deprecation and removal of codeset conversion support from the library
has removed the strict need for an early binding of mime structures to
an easy handle (https://github.com/curl/curl/commit/2610142).
This constraint forces creating the handle before the mime structure,
and the latter cannot be attached to another handle once created
(see https://curl.se/mail/lib-2022-08/0027.html).
This commit removes the handle pointers from the mime structures,
allowing more flexibility in their use.
When an easy handle is duplicated, bound mime structures must still be
duplicated too, as their components hold send-time dynamic information.
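A minimal sketch of the flexibility this enables (URL and field values
are placeholder assumptions): a mime structure created against one easy
handle is attached to a different handle via CURLOPT_MIMEPOST.
```
#include <curl/curl.h>

int main(void)
{
  CURL *first = curl_easy_init();
  CURL *second = curl_easy_init();

  /* create the mime structure against one handle... */
  curl_mime *mime = curl_mime_init(first);
  curl_mimepart *part = curl_mime_addpart(mime);
  curl_mime_name(part, "field");
  curl_mime_data(part, "value", CURL_ZERO_TERMINATED);

  /* ...and attach it to another handle for the actual transfer */
  curl_easy_setopt(second, CURLOPT_URL, "https://example.com/upload");
  curl_easy_setopt(second, CURLOPT_MIMEPOST, mime);
  curl_easy_perform(second);

  curl_mime_free(mime);
  curl_easy_cleanup(first);
  curl_easy_cleanup(second);
  return 0;
}
```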
Closes #9927
When using the option CURLOPT_IGNORE_CONTENT_LENGTH (set.ignorecl in
code) to support growing files in FTP, the code should ignore the
initial size it gets from the server as this will not be the final size
of the file. This is done in ftp_state_quote() to prevent a size request
being issued in the initial sequence. However, in a later call to
ftp_state_get_resp() the code attempts to get the size of the content
again if it doesn't already have it, by parsing the response from the
RETR request. This fix prevents parsing the response for the size when
the set.ignorecl option is set, keeping the size value at -1 (unknown)
in this situation.
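A minimal sketch of the affected use case (the URL is a placeholder):
downloading a still-growing file over FTP with
CURLOPT_IGNORE_CONTENT_LENGTH enabled, so the size reported by the
server is not trusted.
```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/growing.log");
    /* ignore the size reported by the server; the file may still grow */
    curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```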
Closes #9772
- `Curl_ssl_get_config()` now returns the first config if no SSL proxy
filter is active
- socket filter starts connection only on first invocation of its
connect method
Fixes #9982
Closes #9983
`Curl_output_aws_sigv4()` doesn't always have the whole payload in
memory to generate a real payload hash. This commit allows the user to
pass in a header like `x-amz-content-sha256` to provide their desired
payload hash.
Some services like S3 require this header, and may support other values
like S3's `UNSIGNED-PAYLOAD` and `STREAMING-AWS4-HMAC-SHA256-PAYLOAD`
with special semantics. Servers use this header's value as the payload
hash during signature validation, so it must match what the client uses
to generate the signature.
CURLOPT_AWS_SIGV4.3 now describes the content-sha256 interaction.
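A minimal sketch, with placeholder URL, region and credentials, of how
an application might supply its own payload hash header alongside
CURLOPT_AWS_SIGV4:
```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_slist *hdrs = NULL;

    /* tell the sigv4 code to use this value as the payload hash instead
       of hashing the (possibly streamed) request body itself */
    hdrs = curl_slist_append(hdrs, "x-amz-content-sha256: UNSIGNED-PAYLOAD");

    curl_easy_setopt(curl, CURLOPT_URL, "https://bucket.s3.amazonaws.com/key");
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESS_KEY:SECRET_KEY");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```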
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Closes #9804
This makes a big difference for cases when the rewind is not actually
necessary to perform (for example an HTTP 301 response converts the
request to GET) and the rewind can therefore be avoided. It matters in
particular when that rewind would fail, for example when reading from a
pipe or similar.
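A minimal sketch of the scenario this helps (URL and callback are
illustrative assumptions): a POST fed from a non-seekable source such as
stdin, where a 301 redirect converts the request to GET and no rewind is
attempted.
```
#include <stdio.h>
#include <curl/curl.h>

/* read callback feeding the request body from a non-seekable stream */
static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
{
  return fread(buf, size, nitems, (FILE *)userdata);
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/submit");
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, stdin);
    /* a 301 response converts the method to GET, so the request body
       does not need to be rewound and resent */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```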
Reported-by: Ali Utku Selen
Fixes #9735
Closes #9958
In non-IPv6 builds the conn parameter is unused, and compilers run with
"-Werror=unused-parameter" (or similar) warnings turned on fail to
build. Below is an excerpt from a CI job:
vtls/openssl.c: In function ‘Curl_ossl_verifyhost’:
vtls/openssl.c:2016:75: error: unused parameter ‘conn’ [-Werror=unused-parameter]
2016 | CURLcode Curl_ossl_verifyhost(struct Curl_easy *data, struct connectdata *conn,
| ~~~~~~~~~~~~~~~~~~~~^~~~
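One common way to silence this class of warning is to explicitly
void-cast the parameter in builds that do not use it; the self-contained
sketch below (names are hypothetical, not curl's actual fix) illustrates
the pattern.
```
#include <stdio.h>

/* define DEMO_USE_IPV6 in builds that actually need the second parameter */
static int verify_host(const char *name, void *conn)
{
#ifndef DEMO_USE_IPV6
  (void)conn;  /* only needed in the IPv6 code path */
#endif
  printf("verifying %s\n", name);
  return 0;
}

int main(void)
{
  return verify_host("example.com", NULL);
}
```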
Closes: #9970
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Commit 3b16575ae9 removed support for
building on Novell NetWare, but a few leftover traces remained. This
removes the last bits.
Closes: #9966
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
- almost all backend calls pass the Curl_cfilter instance instead of
connectdata + sockindex
- ssl_connect_data is removed from struct connectdata and made internal
to vtls
- ssl_connect_data is allocated in the added filter, kept at cf->ctx
- added functions to let an ssl filter access its ssl_primary_config and
ssl_config_data; these select the proper subfields in conn and data,
for filters added as plain or proxy
- adjusted all backends to use the changed API
- adjusted all backends to access config data via the exposed
functions, no longer using conn or data directly
cfilter renames for clear purpose:
- methods `Curl_conn_*(data, conn, sockindex)` work on the complete
filter chain at `sockindex` and connection `conn`.
- methods `Curl_cf_*(cf, ...)` work on a specific Curl_cfilter
instance.
- methods `Curl_conn_cf()` work on/with filter instances at a
connection.
- rebased and resolved some naming conflicts
- hostname validation (and session lookup) on SECONDARY uses the same
name as on FIRST (again).
New debug macros added, and connectdata removed from function signatures
where not needed.
Schannel adapted to the new Curl_read_plain parameter.
Closes #9919
Update bare GNU Make `Makefile.m32` to:
- Move objects into a subdirectory.
- Add support for MS-DOS. Tested with DJGPP.
- Add support for Watt-32 (on MS-DOS).
- Add support for AmigaOS.
- Rename `Makefile.m32` to `Makefile.mk`
- Replace `ARCH` with `TRIPLET`.
- Build `tool_hugehelp.c` properly (when tools are available).
- Drop MS-DOS compatibility macro `USE_ZLIB` (replaced by `HAVE_LIBZ`).
- Add support for `ZLIB_LIBS` to override `-lz`.
- Omit object files when building examples.
- Default `CC` to `gcc` once again, for convenience. (Caveat: compiler
name `cc` cannot be set now.)
- Set `-DCURL_NO_OLDIES` for examples, like autotools does.
- Delete `makefile.dj` files. Note that the configuration details and
defaults are not retained with the new method.
- Delete `makefile.amiga` files. A successful build needs a few custom
options. We're also not retaining all build details from the existing
Amiga make files.
- Rename `Makefile.m32` to `Makefile.mk` to reflect that they are not
Windows/MinGW32-specific anymore.
- Add support for new `CFG` options: `-map`, `-debug`, `-trackmem`
- Set `-DNDEBUG` by default.
- Allow using `-DOS=...` in all `lib/config-*.h` headers, syncing this
with `config-win32.h`.
- Look for zlib parts in `ZLIB_PATH/include` and `ZLIB_PATH/lib`
instead of bare `ZLIB_PATH`.
Note that existing build configurations for MS-DOS and AmigaOS likely
become incompatible with this change.
Example AmigaOS configuration:
```
export CROSSPREFIX=/opt/amiga/bin/m68k-amigaos-
export CC=gcc
export CPPFLAGS='-DHAVE_PROTO_BSDSOCKET_H'
export CFLAGS='-mcrt=clib2'
export LDFLAGS="${CFLAGS}"
export LIBS='-lnet -lm'
make -C lib -f Makefile.mk
make -C src -f Makefile.mk
```
Example MS-DOS configuration:
```
export CROSSPREFIX=/opt/djgpp/bin/i586-pc-msdosdjgpp-
export WATT_PATH=/opt/djgpp/net/watt
export ZLIB_PATH=/opt/djgpp
export OPENSSL_PATH=/opt/djgpp
export OPENSSL_LIBS='-lssl -lcrypt'
export CFG=-zlib-ssl
make -C lib -f Makefile.mk
make -C src -f Makefile.mk
```
Closes #9764
- Add Curl_conn_is_ip_connected() to check if network connectivity
has been reached.
- Have ftp wait for network connectivity before proceeding with
transfers.
Fixes test failures 1631 and 1632 with hyper.
Closes #9952
Prior to this change Curl_read_plain would attempt to read the
socket directly. On Windows that's a problem because recv data may be
cached by libcurl and that data is only drained using Curl_recv_plain.
Rather than rewrite Curl_read_plain to handle cached recv data, I
changed it to wrap Curl_recv_plain, in much the same way that
Curl_write_plain already wraps Curl_send_plain.
Curl_read_plain -> Curl_recv_plain
Curl_write_plain -> Curl_send_plain
This fixes a bug in the schannel backend where decryption of arbitrary
TLS records fails because cached recv data is never drained. We send
data (TLS records formed by Schannel) using Curl_write_plain, which
calls Curl_send_plain, and that may do a recv-before-send
("pre-receive") to cache received data. The code calls Curl_read_plain
to read data (TLS records from the server), which prior to this change
did not call Curl_recv_plain and therefore cached recv data wasn't
retrieved, resulting in malformed TLS records and decryption failure
(SEC_E_DECRYPT_FAILURE).
The bug has only been observed during Schannel TLS 1.3 handshakes. Refer
to the issue and PR for more information.
--
This is take 2 of the original fix. It preserves the original behavior
of Curl_read_plain to write 0 to the bytes read parameter on error,
since apparently some callers expect that (SOCKS tests were hanging).
The original fix which landed in 12e1def5 and was later reverted in
18383fbf failed to work properly because it did not do that.
Also, it changes Curl_write_plain the same way to complement
Curl_read_plain, and it changes Curl_send_plain to return -1 instead of
0 on CURLE_AGAIN to complement Curl_recv_plain.
Behavior on error with these changes:
Curl_recv_plain returns -1 and *code receives error code.
Curl_send_plain returns -1 and *code receives error code.
Curl_read_plain returns error code and *n (bytes read) receives 0.
Curl_write_plain returns error code and *written receives 0.
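A self-contained sketch of the error-return contract listed above (names
like demo_recv_plain are hypothetical stand-ins, not curl's internals):
the recv-style function returns -1 and hands back an error code, while
the read-style wrapper returns the error code and zeroes the byte count.
```
#include <stdio.h>
#include <sys/types.h>

typedef int demo_code;                /* stands in for CURLcode */
#define DEMO_OK    0
#define DEMO_AGAIN 81

/* recv-style: returns a byte count, or -1 with *code set on error */
static ssize_t demo_recv_plain(char *buf, size_t len, demo_code *code)
{
  (void)buf; (void)len;
  *code = DEMO_AGAIN;                 /* pretend nothing is readable yet */
  return -1;
}

/* read-style wrapper: returns the error code and writes 0 to *n on error */
static demo_code demo_read_plain(char *buf, size_t len, size_t *n)
{
  demo_code code = DEMO_OK;
  ssize_t nread = demo_recv_plain(buf, len, &code);
  if(nread < 0) {
    *n = 0;                           /* callers rely on 0 bytes on error */
    return code;
  }
  *n = (size_t)nread;
  return DEMO_OK;
}

int main(void)
{
  char buf[64];
  size_t n;
  demo_code rc = demo_read_plain(buf, sizeof(buf), &n);
  printf("code=%d bytes=%u\n", rc, (unsigned)n);
  return 0;
}
```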
--
Ref: https://github.com/curl/curl/issues/9431#issuecomment-1312420361
Assisted-by: Joel Depooter
Reported-by: Egor Pugin
Fixes https://github.com/curl/curl/issues/9431
Closes https://github.com/curl/curl/pull/9949
Follow-up to dafdb20a26
HTTP/3 needs a special filter chain, since it does the TLS handling
itself. This PR adds special setup handling in the HTTP protocol handler
that takes care of it.
When a handler, in its setup method, installs filters, the default
behaviour for managing the filter chain is overridden.
Reported-by: Karthikdasari0423 on github
Fixes #9931
Closes #9945
Prior to this change Curl_read_plain would attempt to read the
socket directly. On Windows that's a problem because recv data may be
cached by libcurl and that data is only drained using Curl_recv_plain.
Rather than rewrite Curl_read_plain to handle cached recv data, I
changed it to wrap Curl_recv_plain, in much the same way that
Curl_write_plain already wraps Curl_send_plain.
Curl_read_plain -> Curl_recv_plain
Curl_write_plain -> Curl_send_plain
This fixes a bug in the schannel backend where decryption of arbitrary
TLS records fails because cached recv data is never drained. We send
data (TLS records formed by Schannel) using Curl_write_plain, which
calls Curl_send_plain, and that may do a recv-before-send
("pre-receive") to cache received data. The code calls Curl_read_plain
to read data (TLS records from the server), which prior to this change
did not call Curl_recv_plain and therefore cached recv data wasn't
retrieved, resulting in malformed TLS records and decryption failure
(SEC_E_DECRYPT_FAILURE).
The bug has only been observed during Schannel TLS 1.3 handshakes. Refer
to the issue and PR for more information.
Ref: https://github.com/curl/curl/issues/9431#issuecomment-1312420361
Assisted-by: Joel Depooter
Reported-by: Egor Pugin
Fixes https://github.com/curl/curl/issues/9431
Closes https://github.com/curl/curl/pull/9904
Regression: in commit 53bcf55 we moved the IDN conversion calls to
happen before the HSTS checks. But the HSTS checks are only done on the
server host name, not the proxy names. By moving the proxy name IDN
conversions, we accidentally broke the verbose output showing the proxy
name.
This change moves the IDN conversions for the proxy names back to where
they were in the code path before 53bcf55.
Reported-by: Andy Stamp
Fixes #9937
Closes #9939
The field feature_names contains a null-terminated, sorted array of
feature names. The bitmask field features is deprecated.
Documentation is updated. Test 1177 and tests/version-scan.pl are
updated to match the new documentation format and extended to check
feature names too.
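A minimal sketch of how an application might consume the new field:
```
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
  const char *const *name;

  /* feature_names is a null-terminated, sorted array of feature names */
  for(name = info->feature_names; *name; name++)
    printf("%s\n", *name);
  return 0;
}
```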
Closes #9583
- buffers are updated correctly when handling partial frames
- callbacks are no longer invoked for incomplete payload data of 0 length
- curl_ws_recv no longer returns with a 0-length partial payload
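A minimal sketch of reading a frame in pieces (it assumes `curl` is an
easy handle with a WebSocket connection already established, e.g. via
CURLOPT_CONNECT_ONLY set to 2L):
```
#include <stdio.h>
#include <curl/curl.h>

static void drain_frame(CURL *curl)
{
  char buf[256];
  size_t nread;
  const struct curl_ws_frame *meta = NULL;

  do {
    if(curl_ws_recv(curl, buf, sizeof(buf), &nread, &meta))
      break;
    /* meta->bytesleft reports how much of the frame payload is still
       pending when only a partial payload was delivered */
    printf("got %u bytes, %ld bytes still pending\n",
           (unsigned)nread, (long)meta->bytesleft);
  } while(meta->bytesleft > 0);
}

int main(void)
{
  (void)drain_frame;  /* illustration only; needs a connected handle */
  return 0;
}
```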
Closes #9890
The previously set default value of 8 (64-bit) is only correct for
mingw-w64 and only when we set `_FILE_OFFSET_BITS` to 64 (the default
when building curl). For MSVC, old MinGW and other Windows compilers,
the correct value is 4 (32-bit). Adjust condition accordingly. Also
drop the manual override option.
Regression in 7.86.0 (from 68fa9bf3f5)
Bug: https://github.com/curl/curl/pull/9712#issuecomment-1307330551
Reported-by: Peter Piekarski
Reviewed-by: Jay Satiro
Closes #9872
This struct field MUST remain what the application set it to, so that
handle reuse and handle duplication work.
Instead, the request state bit 'no_body' is introduced for code flows
that need to change this at run-time.
Closes #9888
- general construct/destroy in connectdata
- default implementations of callback functions
- connect: cfilters for connect and accept
- socks: cfilter for socks proxying
- http_proxy: cfilter for http proxy tunneling
- vtls: cfilters for primary and proxy ssl
- change in general handling of data/conn
- Curl_cfilter_setup() sets up filter chain based on data settings,
if none are installed by the protocol handler setup
- Curl_cfilter_connect() bootstraps filters into `connected` status,
used by handlers and multi to reach further stages
- Curl_cfilter_is_connected() to check if a conn is connected,
i.e. all filters have done their work
- Curl_cfilter_get_select_socks() gets the sockets and READ/WRITE
indicators for multi select to work
- Curl_cfilter_data_pending() asks filters if they have incoming
data pending for recv
- Curl_cfilter_recv()/Curl_cfilter_send() are the general callbacks
installed in conn->recv/conn->send for I/O handling
- Curl_cfilter_attach_data()/Curl_cfilter_detach_data() inform filters
about addition/removal of a `data` from their connection
- adding vtls functions to prevent use of Curl_ssl globals directly
in other parts of the code.
Reviewed-by: Daniel Stenberg
Closes #9855
Unlike `CONNECT`, we currently do not keep track of whether the `PROXY`
header has already been sent. This causes the `PROXY` header to be sent
twice, during `MSTATE_TUNNELING` and `MSTATE_PROTOCONNECT`.
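A minimal sketch of the setup involved, assuming this refers to the
PROXY protocol header enabled with CURLOPT_HAPROXYPROTOCOL (the URL is a
placeholder):
```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* send the haproxy PROXY protocol header before the request */
    curl_easy_setopt(curl, CURLOPT_HAPROXYPROTOCOL, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```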
Closes #9878
Fixes #9442
Adds a new option to control the maximum time that a cached
certificate store may be retained for.
Currently only the OpenSSL backend implements support for
caching certificate stores.
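A minimal sketch, assuming the new option is CURLOPT_CA_CACHE_TIMEOUT
and that 300 seconds is just an illustrative value:
```
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* keep a cached certificate store around for at most 300 seconds */
    curl_easy_setopt(curl, CURLOPT_CA_CACHE_TIMEOUT, 300L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```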
Closes #9620