- use the new `Curl_xfer_write_resp()` to write incoming responses
directly to the client
- eliminates `stream->recvbuf`
- memory consumption on parallel transfers minimized
Closes#12828
- let `multi_getsock()` initialize the pollset to what the
transfer state requires with regard to SEND/RECV
- change the connection filters' `adjust_pollset()` implementations
to react to the presence of POLLIN/POLLOUT in the pollset and
no longer check CURL_WANT_SEND/CURL_WANT_RECV
- cf-socket will no longer add POLLIN on its own
- the HTTP/2 and HTTP/3 filters only do adjustments if the
passed pollset already wants POLLIN/POLLOUT for the transfer
on the socket. This mirrors the HTTP/2 proxy filter and
works with stacked filters.
Closes#12640
do not add a socket for POLLIN when the transfer does not want to send
(for example, when it is paused).
Follow-up to 47f5b1a
Reported-by: bubbleguuum on github
Fixes#12632
Closes#12633
- there seems to be a code path that cleans up easy handles without
triggering DONE or DETACH events to the connection filters. This
would explain why nghttp2 still holds stream user data
- add a GOOD check of the easy handle used in on_close_callback to
prevent crashes and ASSERTs in debug builds.
- NULL the stream user data early before submitting RST
- add checks in on_stream_close() to identify UNGOOD easy handles
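For illustration, a minimal sketch of such a defensive check using nghttp2's
public stream-close callback; `stream_handle_looks_valid()` stands in for
curl's internal GOOD check and is purely hypothetical:
```
#include <nghttp2/nghttp2.h>
#include <stddef.h>

/* hypothetical stand-in for curl's internal "GOOD" magic check */
static int stream_handle_looks_valid(void *stream_user_data)
{
  return stream_user_data != NULL; /* a real check verifies a magic value */
}

static int on_stream_close(nghttp2_session *session, int32_t stream_id,
                           uint32_t error_code, void *userp)
{
  void *h = nghttp2_session_get_stream_user_data(session, stream_id);
  (void)error_code;
  (void)userp;
  if(!stream_handle_looks_valid(h)) {
    /* the easy handle went away without a DONE/DETACH event;
       ignore instead of dereferencing a stale pointer */
    return 0;
  }
  /* ... regular close handling ... */
  return 0;
}
```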
Reported-by: Hans-Christian Egtvedt
Fixes#10936
Closes#12562
- use `data->state.dselect_bits` everywhere instead
- remove the `bool *comeback` parameter, as a non-zero
`data->state.dselect_bits` indicates that IO is
incomplete.
Closes#12512
- refs #12356 where a UAF is reported when closing a connection
with a stream whose easy handle was cleaned up already
- handle DETACH events the same as DONE events in h2/h3 filters
Fixes#12356
Reported-by: Paweł Wegner
Closes#12364
GCC 14 introduces a new -Walloc-size warning, included in -Wextra, which gives:
```
src/tool_operate.c: In function ‘add_per_transfer’:
src/tool_operate.c:213:5: warning: allocation of insufficient size ‘1’ for type ‘struct per_transfer’ with size ‘480’ [-Walloc-size]
213 | p = calloc(sizeof(struct per_transfer), 1);
| ^
src/var.c: In function ‘addvariable’:
src/var.c:361:5: warning: allocation of insufficient size ‘1’ for type ‘struct var’ with size ‘32’ [-Walloc-size]
361 | p = calloc(sizeof(struct var), 1);
| ^
```
The calloc prototype is:
```
void *calloc(size_t nmemb, size_t size);
```
So, just swap the number of members and size arguments to match the
prototype, as we're initialising 1 struct of size `sizeof(struct
...)`. GCC then sees we're not doing anything wrong.
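A standalone illustration of the swapped arguments (the struct below is a
stand-in, not the real `struct per_transfer`):
```
#include <stdlib.h>

struct per_transfer { char payload[480]; }; /* stand-in, not curl's struct */

int main(void)
{
  struct per_transfer *p;

  /* before: p = calloc(sizeof(struct per_transfer), 1);
     GCC 14 -Walloc-size warns because the "size" argument is only 1 */
  p = calloc(1, sizeof(struct per_transfer)); /* nmemb = 1, size = struct */
  if(!p)
    return 1;
  free(p);
  return 0;
}
```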
Closes#12292
Connection filters had a `get_select_socks()` method, inspired by the
various `getsocks` functions involved during the lifetime of a
transfer. These, depending on transfer state (CONNECT/DO/DONE/ etc.),
return sockets to monitor and flag if this shall be done for POLLIN
and/or POLLOUT.
Due to this design, sockets and flags could only be added, not
removed. This led to problems in filters like HTTP/2 where flow control
prohibits the sending of data until the peer increases the flow
window. The general transfer loop wants to write, adds POLLOUT, the
socket is writeable but no data can be written.
This leads to cpu busy loops. To prevent that, HTTP/2 did set the
`SEND_HOLD` flag of such a blocked transfer, so the transfer loop ceases
further attempts. This works if only one such filter is involved. If a
HTTP/2 transfer goes through a HTTP/2 proxy, two filters are
setting/clearing this flag and may step on each other's toes.
The connection filters' `get_select_socks()` is replaced by
`adjust_pollset()`. They get passed a `struct easy_pollset` that keeps
up to `MAX_SOCKSPEREASYHANDLE` sockets and their `POLLIN|POLLOUT`
flags. This struct is initialized in `multi_getsock()` by calling the
various `getsocks()` implementations based on transfer state, as before.
After protocol handlers/transfer loop have set the sockets and flags
they want, the `easy_pollset` is *always* passed to the filters. Filters
"higher" in the chain are called first, starting at the first
not-yet-connected one. Each filter may add sockets and/or change
flags. When all flags are removed, the socket itself is removed from the
pollset.
Example:
* transfer wants to send, adds POLLOUT
* http/2 filter has a flow control block, removes POLLOUT and adds
POLLIN (it is waiting on a WINDOW_UPDATE from the server)
* TLS filter is connected and changes nothing
* h2-proxy filter also has a flow control block on its tunnel stream,
removes POLLOUT and adds POLLIN also.
* socket filter is connected and changes nothing
* The resulting pollset is then mixed together with all other transfers
and their pollsets, just as before.
Use of `SEND_HOLD` is no longer necessary in the filters.
All filters are adapted for the changed method. The handling in
`multi.c` has been adjusted, but its state handling and the protocol
handlers' `getsocks` methods are untouched.
The most affected filters are http/2, ngtcp2, quiche and h2-proxy. TLS
filters needed to be adjusted for the connecting handshake read/write
handling.
No noticeable difference in performance was detected in local scorecard
runs.
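A self-contained toy model of the mechanism described above; it assumes
nothing about curl's real `easy_pollset` layout or helper names, and only
shows how a flow-blocked filter can trade POLLOUT for POLLIN and how a
socket drops out of the set once no flags remain:
```
#include <poll.h>
#include <stdio.h>

#define MAX_SOCKSPEREASYHANDLE 4

struct toy_pollset {
  int sockets[MAX_SOCKSPEREASYHANDLE];
  short actions[MAX_SOCKSPEREASYHANDLE]; /* POLLIN and/or POLLOUT */
  unsigned int num;
};

/* change the flags for a socket; drop the entry when no flags remain */
static void toy_pollset_change(struct toy_pollset *ps, int sock,
                               short add, short del)
{
  unsigned int i;
  for(i = 0; i < ps->num; i++) {
    if(ps->sockets[i] == sock) {
      ps->actions[i] = (short)((ps->actions[i] | add) & ~del);
      if(!ps->actions[i]) { /* nothing to monitor: remove the socket */
        ps->sockets[i] = ps->sockets[ps->num - 1];
        ps->actions[i] = ps->actions[ps->num - 1];
        ps->num--;
      }
      return;
    }
  }
  if(add && ps->num < MAX_SOCKSPEREASYHANDLE) {
    ps->sockets[ps->num] = sock;
    ps->actions[ps->num] = add;
    ps->num++;
  }
}

/* what an h2-like filter might do when flow control blocks sending:
   stop asking for POLLOUT, wait for a WINDOW_UPDATE via POLLIN */
static void toy_h2_adjust_pollset(struct toy_pollset *ps, int sock,
                                  int flow_blocked)
{
  if(flow_blocked)
    toy_pollset_change(ps, sock, POLLIN, POLLOUT);
}

int main(void)
{
  struct toy_pollset ps = { {0}, {0}, 0 };
  toy_pollset_change(&ps, 7, POLLOUT, 0); /* transfer wants to send */
  toy_h2_adjust_pollset(&ps, 7, 1);       /* filter is flow-blocked */
  printf("sockets=%u actions=%d\n", ps.num, ps.num ? (int)ps.actions[0] : 0);
  return 0;
}
```
With the example sequence above, the toy pollset ends with the one socket
registered for POLLIN only, matching the flow described earlier.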
Closes#11833
Getting nghttp2's error message helps users understand what's going
on. For example, when the connection is brought down because a forbidden
header was used, as that header is then not displayed by curl itself.
Example:
curl: (92) Invalid HTTP header field was received: frame type: 1,
stream: 1, name: [upgrade], value: [h2,h2c]
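One way to capture that message is nghttp2's error callback; the snippet
below is a hedged sketch and not necessarily how curl wires it internally:
```
#include <nghttp2/nghttp2.h>
#include <string.h>

/* keep nghttp2's last error text so it can be put into the failure message */
static char last_h2_error[256];

static int on_h2_error(nghttp2_session *session, const char *msg, size_t len,
                       void *userp)
{
  (void)session;
  (void)userp;
  if(len >= sizeof(last_h2_error))
    len = sizeof(last_h2_error) - 1;
  memcpy(last_h2_error, msg, len);
  last_h2_error[len] = 0;
  return 0;
}

/* registered once while setting up the session callbacks */
static void register_error_cb(nghttp2_session_callbacks *cbs)
{
  nghttp2_session_callbacks_set_error_callback(cbs, on_h2_error);
}
```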
Ref: #12172
Closes#12179
- fold the code to convert dynhds to the nghttp2 structs
into a dynhds internal method
- saves code duplication
- pacifies compiler analyzers
Closes#12097
populate_binsettings now returns a negative value on error, instead of a
huge positive value. Both places which call this function have been
updated to handle this change in its contract.
The way populate_binsettings was used prior to this change, the huge
positive values -- caused by signed->unsigned conversion of the potentially
negative result of nghttp2_pack_settings_payload, which returns negative
values on error -- were not possible. But that is only because http2.c currently
always provides a large enough output buffer and provides H2 SETTINGS
IVs which pass the verification logic inside nghttp2. If the
verification logic were to change or if http2.c started passing in more
IVs without increasing the output buffer size, the overflow could become
reachable, and libcurl/curl might start leaking memory contents to
servers/proxies...
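To make the contract concrete, here is a standalone example of the safe
pattern: treat a negative return from nghttp2_pack_settings_payload as an
error and never let it become an unsigned length (an illustration, not
curl's code):
```
#include <nghttp2/nghttp2.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
  uint8_t buf[80];
  nghttp2_settings_entry iv[] = {
    { NGHTTP2_SETTINGS_MAX_CONCURRENT_STREAMS, 100 },
    { NGHTTP2_SETTINGS_INITIAL_WINDOW_SIZE, 65535 },
    { NGHTTP2_SETTINGS_ENABLE_PUSH, 0 }
  };
  /* returns the payload length, or a negative nghttp2 error code */
  ssize_t n = nghttp2_pack_settings_payload(buf, sizeof(buf), iv,
                                            sizeof(iv)/sizeof(iv[0]));
  if(n < 0) {
    /* do not convert this to an unsigned size: the error would turn
       into a huge "valid" length */
    fprintf(stderr, "packing SETTINGS failed: %s\n", nghttp2_strerror((int)n));
    return 1;
  }
  printf("packed %zd bytes of SETTINGS payload\n", n);
  return 0;
}
```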
Closes#12101
- answer HTTP/2 streams refused via a GOAWAY from the server
with CURLE_RECV_ERROR in order to trigger a retry
on another connection
Reported-by: black-desk on github
Ref #11859
Closes#12054
- If SSL shutdown is not finished then make an additional call to
SSL_read to gather additional tracing.
- Fix http2 and h2-proxy filters to forward do_close() calls to the next
filter.
For example h2 and SSL shutdown before and after this change:
Before:
Curl_conn_close -> cf_hc_close -> Curl_conn_cf_discard_chain ->
ssl_cf_destroy
After:
Curl_conn_close -> cf_hc_close -> cf_h2_close -> cf_setup_close ->
ssl_cf_close
Note that currently the tracing does not show output on the connection
closure handle. Refer to discussion in #11878.
Ref: https://github.com/curl/curl/discussions/11878
Closes https://github.com/curl/curl/pull/11858
- refs #11342 where errors with git https interactions
were observed
- the problem was caused by first sends larger than 64KB,
which resulted in later retries of only 64KB
- limit sending of the first block to 64KB
- adjust h2/h3 filters to cope with parsing the HTTP/1.1
formatted request in chunks
- introduce Curl_nwrite() as a companion to Curl_write()
for the many cases where the sockindex is already known
Fixes#11342 (again)
Closes#11803
- added test cases for various code paths
- fixed handling of blocked writes when the stream had
been closed in between attempts
- re-enabled DEBUGASSERT on send with smaller data size
- in debug builds, environment variables can be set to simulate a slow
network when sending data. cf-socket.c and vquic.c support
* CURL_DBG_SOCK_WBLOCK: percentage of send() calls that should be
answered with an EAGAIN. TCP/UNIX sockets.
This is chosen randomly.
* CURL_DBG_SOCK_WPARTIAL: percentage of data that shall be written
to the network. TCP/UNIX sockets.
Example: 80 means a send with 1000 bytes would only send 800
This is applied to every send.
* CURL_DBG_QUIC_WBLOCK: percentage of send() calls that should be
answered with EAGAIN. QUIC only.
This is chosen randomly.
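A rough sketch of how such a percentage knob can be applied around send();
this illustrates the mechanism only and is not curl's actual cf-socket.c
code:
```
#include <errno.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>

/* fail a configured percentage of sends with EAGAIN, roughly how a
   CURL_DBG_SOCK_WBLOCK-style debug knob could be applied */
static ssize_t dbg_send(int fd, const void *buf, size_t len)
{
  static int block_percent = -1;
  if(block_percent < 0) {
    const char *p = getenv("CURL_DBG_SOCK_WBLOCK");
    block_percent = p ? atoi(p) : 0;
  }
  if(block_percent && (rand() % 100) < block_percent) {
    errno = EAGAIN; /* pretend the socket buffer is full */
    return -1;
  }
  return send(fd, buf, len, 0);
}
```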
Closes#11756
- check in the h2 filter recv that the stream actually exists
and return an error if not
- add test for parallel, extreme h2 upgrades that fail if
connections get reused before fully switched
- add h2 upgrade upload test just for completeness
Closes#11563
HTTP/1 connections that are upgraded to HTTP/2 should not be picked up
for reuse and multiplexing by other handles until the 101 switching
process is completed.
Lots-of-debugging-by: Stefan Eissing
Reported-by: Richard W.M. Jones
Bug: https://curl.se/mail/lib-2023-07/0045.html
Closes#11557
- not clear how this triggers and it blocks OSSFuzz testing other
things. Since we handle the case with an error return, disabling the
assertion for now seems the best way forward.
Fixes#11500
Closes#11519
- adding tests using very large passwords in auth
- fixes general http sending to treat h3 like h2, and
not like http1.1
- eliminate the H2_HEADER max definitions and use the common
DYN_HTTP_REQUEST everywhere; different limits do not help
- fix http2 handling of requests denied by nghttp2 on send
to immediately report the refused stream
Closes#11509
- refs #11426 where spurious stalls on large POST requests
are reported
- the issue seems to involve the following:
* the first stream on the connection adds up to 64KB of POST
data, which is the max default HTTP/2 stream window size;
the transfer is then set to HOLD
* initial SETTINGS from server arrive, enlarging the stream
window. But no WINDOW_UPDATE is received.
* curl stalls
- the fix un-HOLDs a stream on receiving SETTINGS, not
relying on a WINDOW_UPDATE from lazy servers
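A hedged sketch of the idea in nghttp2's frame-receive callback;
`resume_blocked_sends()` is a hypothetical helper standing in for curl's
un-HOLD logic:
```
#include <nghttp2/nghttp2.h>

/* hypothetical helper standing in for clearing the HOLD state */
static void resume_blocked_sends(void *userp)
{
  (void)userp; /* curl would re-engage the paused transfer here */
}

static int on_frame_recv(nghttp2_session *session, const nghttp2_frame *frame,
                         void *userp)
{
  (void)session;
  if(frame->hd.type == NGHTTP2_SETTINGS &&
     !(frame->hd.flags & NGHTTP2_FLAG_ACK)) {
    /* the peer's SETTINGS may have enlarged the stream window;
       do not keep waiting for a WINDOW_UPDATE that may never come */
    resume_blocked_sends(userp);
  }
  return 0;
}
```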
Closes#11450
- fix HTTP/2 check to not declare a connection dead when
the read attempt results in EAGAIN
- add an H2-PROXY alive check like the one for HTTP/2, which was
missing and is needed
- add attach/detach around Curl_conn_is_alive() and remove
these in filter methods
- add checks for number of connections used in some test_10
proxy tunneling tests
Closes#11368
- refs #11357, where it was reported that HTTP/1.1 downgrades
no longer work
- fixed with suggested change
- added test_05_03 and a new handler in the curltest module
to verify that downgrades work
Fixes#11357
Closes#11362
Reported-by: Jay Satiro
- fixes#11242 where 100% CPU on uploads was reported
- fixes possible stalls on the last part of a request body when
that information could not be fully sent on the connection
due to an EAGAIN
- applies the same EAGAIN handling to HTTP/2 proxying
Reported-by: Sergey Alirzaev
Fixes#11242
Closes#11342
- Use CURL_FORMAT_CURL_OFF_T where %zd was erroneously used for some
curl_off_t variables.
- Use %zu where %zd was erroneously used for some size_t variables.
Prior to this change some of the Windows CI tests were failing because
Windows 32-bit targets have a 32-bit size_t and a 64-bit curl_off_t.
When %zd was used for a curl_off_t variable, only the lower
32 bits were read and the upper 32 bits would be read as part or all of
the next specifier.
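For reference, the correct pairing of specifiers, using libcurl's public
CURL_FORMAT_CURL_OFF_T macro:
```
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  size_t nread = 1024;
  curl_off_t uploaded = (curl_off_t)5 * 1024 * 1024 * 1024; /* > 32 bits */

  /* size_t takes %zu; curl_off_t takes CURL_FORMAT_CURL_OFF_T.
     %zd would misread curl_off_t on 32-bit Windows targets. */
  printf("read %zu bytes, uploaded %" CURL_FORMAT_CURL_OFF_T " bytes\n",
         nread, uploaded);
  return 0;
}
```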
Fixes https://github.com/curl/curl/issues/11327
Closes https://github.com/curl/curl/pull/11321
`max_recv_speed` is `curl_off_t`, so using `size_t` might result in
-Wconversion GCC warnings for 32-bit `size_t`. Visible in the NetBSD
ARM autobuilds.
Closes https://github.com/curl/curl/pull/11312
- add an `id` long to Curl_easy, -1 on init
- once added to a multi (or its own multi), it gets
a non-negative number assigned by the connection cache
- `id` is unique among all transfers using the same
cache until reaching LONG_MAX where it will wrap
around. So, not unique eternally.
- CURLINFO_CONN_ID returns the connection id attached to
data or, if none present, data->state.lastconnect_id
- variables and type declared in tool for write out
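A usage example with the public API (CURLINFO_CONN_ID fills in a
curl_off_t):
```
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t conn_id = -1;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_CONN_ID, &conn_id) == CURLE_OK)
      printf("connection id: %" CURL_FORMAT_CURL_OFF_T "\n", conn_id);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```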
Closes#11185
Aka "jumbo" or "amalgamation" builds. It means to compile all sources
per target as a single C source. This is experimental.
You can enable it by passing `-DCMAKE_UNITY_BUILD=ON` to cmake.
It requires CMake 3.16 or newer.
It makes builds (much) faster, allows for better optimizations and tends
to promote less ambiguous code.
Also add a new AppVeyor CI job and convert an existing one to use
"unity" mode (one MSVC, one MinGW), and enable it for one macOS CI job.
Fix related issues:
- add missing include guard to `easy_lock.h`.
- rename static variables and functions (and a macro) with names reused
across sources, or shadowed by local variables.
- add an `#undef` after use.
- add a missing `#undef` before use.
- move internal definitions from `ftp.h` to `ftp.c`.
- `curl_memory.h` fixes to make it work when included repeatedly.
- stop building/linking curlx bits twice for a static-mode curl tool.
These caused doubly defined symbols in unity builds.
- silence the missing extern declaration compiler warning for `_CRT_glob`.
- fix extern declarations for `tool_freq` and `tool_isVistaOrGreater`.
- fix colliding static symbols in debug mode: `debugtime()` and
`statename`.
- rename `ssl_backend_data` structure to unique names for each
TLS-backend, along with the `ssl_connect_data` struct member
referencing them. This required adding casts for each access.
- add workaround for missing `[P]UNICODE_STRING` types in certain Windows
builds when compiling `lib/ldap.c`. To support "unity" builds, we had
to enable `SCHANNEL_USE_BLACKLISTS` for Schannel (a Windows
`schannel.h` option) _globally_. This caused an indirect inclusion of
Windows `schannel.h` from `ldap.c` via `winldap.h` to have it enabled
as well. This requires the `[P]UNICODE_STRING` types, which are apparently
not defined automatically (as seen with both MSVS and mingw-w64).
This patch includes `<subauth.h>` to fix it.
Ref: https://github.com/curl/curl/runs/13987772013
Ref: https://dev.azure.com/daniel0244/curl/_build/results?buildId=15827&view=logs&jobId=2c9f582d-e278-56b6-4354-f38a4d851906&j=2c9f582d-e278-56b6-4354-f38a4d851906&t=90509b00-34fa-5a81-35d7-5ed9569d331c
- tweak unity builds to compile `lib/memdebug.c` separately in memory
trace builds to avoid PP confusion.
- force-disable unity for test programs.
- do not compile and link libcurl sources to libtests _twice_ when libcurl
is built in static mode.
KNOWN ISSUES:
- running tests with unity builds may fail in some cases.
- some build configurations/env may not compile in unity mode. E.g.:
https://ci.appveyor.com/project/curlorg/curl/builds/47230972/job/51wfesgnfuauwl8q#L250
Ref: https://github.com/libssh2/libssh2/issues/1034
Ref: https://cmake.org/cmake/help/latest/prop_tgt/UNITY_BUILD.html
Ref: https://en.wikipedia.org/wiki/Unity_build
Closes#11095
- leave the transfer loop when --limit-rate is in effect and the
allowed amount has been received
- adjust stream window size to --limit-rate plus some slack
to make the server observe the pacing we want
- add test case to confirm behaviour
Closes#11115
- doing a POST with `--digest` does an override on the initial request
with `Content-Length: 0`, but the http2 filter was unaware of that
and expected the original request body. It therefore did not
send a final DATA frame with the EOF flag to the server.
- The fix overrides any initial notion of post size when the `done_send`
event is triggered by the transfer loop, leading to the EOF that
is necessary.
- refs #11194. The fault did not happen in testing, as Apache httpd
never tries to read the request body of the initial request,
sends the 401 reply and closes the stream. The server used in the
reported issue however tried to read the EOF and timed out on the
request.
Reported-by: Aleksander Mazur
Fixes#11194
Closes #11200
- refs #11157 and #11175 where uploads get stuck or lead to RST streams
- fixes our h2 send behaviour to continue sending in the nghttp2 session
as long as it wants to. This will empty our send buffer as long as
the remote stream/connection window allows.
- in case the window is exhausted, the data remaining in the send buffer
will wait for a WINDOW_UPDATE from the server. Which is a socket event
that engages our transfer loop again
- the problem in the issue was that we did not exhaust the window, but
left data in the send buffer and no further socket events happened.
The server was just waiting for us to send more.
- relatedly, fixed an issue where closing a stream with KEEP_HOLD
set kept the transfer from shutting down as it should have, leading
to a timeout.
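A simplified sketch of the send-loop idea using nghttp2's public API; the
buffer and callback here stand in for curl's own send buffering and are
assumptions, with the callback registered via
nghttp2_session_callbacks_set_send_callback():
```
#include <nghttp2/nghttp2.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/* toy, growing send buffer standing in for curl's send buffer */
struct outbuf { uint8_t *data; size_t len; };

static ssize_t send_cb(nghttp2_session *session, const uint8_t *buf,
                       size_t len, int flags, void *userp)
{
  struct outbuf *out = userp;
  uint8_t *p = realloc(out->data, out->len + len);
  (void)session;
  (void)flags;
  if(!p)
    return NGHTTP2_ERR_CALLBACK_FAILURE;
  memcpy(p + out->len, buf, len);
  out->data = p;
  out->len += len;
  return (ssize_t)len;
}

/* keep driving the session while it wants to write: nghttp2 stops by
   itself once the stream/connection window is exhausted, and a later
   WINDOW_UPDATE (a socket event) re-engages the transfer loop */
static int drive_h2_send(nghttp2_session *session)
{
  while(nghttp2_session_want_write(session)) {
    int rv = nghttp2_session_send(session); /* invokes send_cb */
    if(rv)
      return rv; /* negative nghttp2 error code */
  }
  return 0;
}
```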
Closes#11176
Make send buffer smaller to have progress and "upload done" reporting
closer to reality. Fix handling of send "drain" condition to no longer
trigger once the transfer loop reports it is done sending. Also do not
trigger the send "drain" on RST streams.
Background:
- an upload stall was reported in #11157 that timed out
- test_07_33a reproduces a problem with such a stall if the
server 404s the request and RSTs the stream.
- test_07_33b verifies a successful PUT, using the parameters
from #11157 and checks success
Ref: #11157
Closes#11165
This works around #11138 by doubling the limit and should be a
relatively safe fix.
Ideally the buffer would grow as needed and there would be no need for a
limit? But that might be follow-up material.
Fixes#11138
Closes#11139
Out of 415 labels throughout the code base, 86 were not at the start
of the line, which means labels at the start of the line are the
favoured style overall, with 329 instances.
Out of the 86 labels not at the start of the line:
* 75 were indented with the same indentation level of the following line
* 8 were indented with exactly one space
* 2 were indented with one fewer indentation level than the following
line
* 1 was indented with the indentation level of the following line minus
three spaces (probably unintentional)
Co-Authored-By: Viktor Szakats
Closes#11134
- nghttp2 does not free the connection-level flow control window
for aborted streams
- when closing transfers, make sure that any buffered
response data is "given back" to the flow control window
- add tests test_02_22 and test_02_23 to reproduce
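A hedged sketch of the "give back" step using nghttp2's public consume API
(valid when automatic window updates are disabled); not necessarily the
exact call curl makes:
```
#include <nghttp2/nghttp2.h>

/* when a transfer is closed with response bytes still buffered and
   unconsumed, tell nghttp2 those bytes are gone so the connection-level
   window credit is returned (requires automatic window updates to be
   disabled, e.g. via nghttp2_option_set_no_auto_window_update()) */
static void give_back_window(nghttp2_session *session, int32_t stream_id,
                             size_t buffered_len)
{
  if(buffered_len)
    (void)nghttp2_session_consume(session, stream_id, buffered_len);
}
```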
Closes#11052
- fixes stalled connections
- Make the connection window large enough, so that there is
some room left should 99/100 streams be PAUSED by the application
Reported-by: Paweł Wegner
Fixes#10988
Closes#11043