Connection filters had a `get_select_socks()` method, inspired by the
various `getsocks` functions invoked during the lifetime of a
transfer. These, depending on transfer state (CONNECT/DO/DONE etc.),
return the sockets to monitor and flag whether monitoring shall be done
for POLLIN and/or POLLOUT.
Due to this design, sockets and flags could only be added, never
removed. This led to problems in filters like HTTP/2 where flow control
prohibits sending data until the peer increases the flow
window. The general transfer loop wants to write and adds POLLOUT; the
socket becomes writable, but no data can be written.
This leads to CPU busy loops. To prevent that, HTTP/2 set the
`SEND_HOLD` flag on such a blocked transfer, so the transfer loop ceases
further attempts. This works if only one such filter is involved. If an
HTTP/2 transfer goes through an HTTP/2 proxy, two filters are
setting/clearing this flag and may step on each other's toes.
The connection filters' `get_select_socks()` is replaced by
`adjust_pollset()`. They get passed a `struct easy_pollset` that keeps
up to `MAX_SOCKSPEREASYHANDLE` sockets and their `POLLIN|POLLOUT`
flags. This struct is initialized in `multi_getsock()` by calling the
various `getsocks()` implementations based on transfer state, as before.
After the protocol handlers/transfer loop have set the sockets and flags
they want, the `easy_pollset` is *always* passed to the filters. Filters
"higher" in the chain are called first, starting at the first
not-yet-connected one. Each filter may add sockets and/or change
flags. When all flags are removed, the socket itself is removed from the
pollset.
Example:
* transfer wants to send, adds POLLOUT
* http/2 filter has a flow control block, removes POLLOUT and adds
POLLIN (it is waiting on a WINDOW_UPDATE from the server)
* TLS filter is connected and changes nothing
* h2-proxy filter also has a flow control block on its tunnel stream,
removes POLLOUT and adds POLLIN as well.
* socket filter is connected and changes nothing
* The resulting pollset is then mixed together with all other transfers
and their pollsets, just as before.
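To illustrate, a filter's `adjust_pollset()` might look roughly like the
sketch below. The simplified struct and the `pollset_change()` helper are
illustrative stand-ins, not the exact libcurl internals:
```c
#include <poll.h>      /* POLLIN, POLLOUT */
#include <stdbool.h>

/* simplified stand-in for libcurl's internal `struct easy_pollset`,
   which keeps up to MAX_SOCKSPEREASYHANDLE sockets and their flags */
#define MAX_SOCKSPEREASYHANDLE 4

struct easy_pollset {
  int sockets[MAX_SOCKSPEREASYHANDLE];
  short actions[MAX_SOCKSPEREASYHANDLE];  /* POLLIN|POLLOUT per socket */
  unsigned int num;
};

/* hypothetical helper: add/remove flags for `sock`, dropping the socket
   from the pollset entirely when no flags remain */
static void pollset_change(struct easy_pollset *ps, int sock,
                           short add_flags, short del_flags)
{
  unsigned int i;
  for(i = 0; i < ps->num; i++) {
    if(ps->sockets[i] == sock) {
      ps->actions[i] = (short)((ps->actions[i] | add_flags) & ~del_flags);
      if(!ps->actions[i]) {
        /* all flags removed: remove the socket itself */
        ps->sockets[i] = ps->sockets[ps->num - 1];
        ps->actions[i] = ps->actions[ps->num - 1];
        ps->num--;
      }
      return;
    }
  }
  if(add_flags && ps->num < MAX_SOCKSPEREASYHANDLE) {
    /* socket not in the pollset yet, add it with the wanted flags */
    ps->sockets[ps->num] = sock;
    ps->actions[ps->num] = add_flags;
    ps->num++;
  }
}

/* sketch of an HTTP/2-style filter: when flow control blocks sending,
   stop asking for POLLOUT and wait for POLLIN instead, since a
   WINDOW_UPDATE must arrive before more data can be written */
static void my_adjust_pollset(struct easy_pollset *ps, int sock,
                              bool send_blocked_by_flow_control)
{
  if(send_blocked_by_flow_control)
    pollset_change(ps, sock, POLLIN, POLLOUT);
}
```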
Use of `SEND_HOLD` is no longer necessary in the filters.
All filters are adapted to the changed method. The handling in
`multi.c` has been adjusted, but its state handling and the protocol
handlers' `getsocks` methods are untouched.
The most affected filters are http/2, ngtcp2, quiche and h2-proxy. TLS
filters needed to be adjusted for the connecting handshake read/write
handling.
No noticeable difference in performance was detected in local scorecard
runs.
Closes #11833
Put the instructions to run tests right at the top of tests/README.md.
Give instructions to read the runtests.1 man page for information
about flags. Delete redundant copy of the flags documentation in the
README.
Add a mention in README.md of the important parallelism flag, to make
test runs go much faster.
Move documentation of output line format into the runtests.1 man page,
and update it with missing flags.
Fix the order of two flags in the man page.
Closes #12193
Python precheck/postcheck alternatives were included but commented out.
Since these are not used and perl is guaranteed to be available to run
the perl versions anyway, the Python ones are removed.
The checkcmd() and checktestcmd() functions would not have worked on
Windows due to hard-coding the UNIX PATH separator character and not
adding the .exe file extension. This meant that tools like stunnel,
valgrind and nghttpx would not have been found and used on Windows, and
inspection of previous test runs shows none of those being found in pure
Windows CI builds.
With this fixed, they can be used to detect the handle64.exe program
before attempting to use it. When handle64.exe was called
unconditionally without it existing, it caused perl to abort the test
run with the error
The running command stopped because the preference variable
"ErrorActionPreference" or common parameter is set to Stop:
sh: handle64.exe: command not found
Closes #12115
- Add additional checking for missing and too-short SOCKS5 handshake
messages.
Prior to this change the SOCKS5 test server did not check that all parts
of the handshake were received successfully. If those parts were missing
or too short then the server would access uninitialized memory.
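As a hedged sketch of the kind of check added (illustrative code, not the
actual tests/server source), the server can verify the received byte
count before it looks at any handshake fields:
```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* illustrative only: read the SOCKS5 client greeting and make sure it
   is long enough before any of its fields are inspected */
static int read_socks5_greeting(int fd)
{
  unsigned char buf[512];
  ssize_t nread = recv(fd, buf, sizeof(buf), 0);

  if(nread <= 0) {
    fprintf(stderr, "connection closed before a handshake arrived\n");
    return -1;
  }
  /* greeting is VER, NMETHODS, then NMETHODS method bytes */
  if(nread < 2 || nread < 2 + buf[1]) {
    fprintf(stderr, "short SOCKS5 greeting (%zd bytes)\n", nread);
    return -1;   /* too short: do not read past what was received */
  }
  if(buf[0] != 5) {
    fprintf(stderr, "unexpected SOCKS version %d\n", buf[0]);
    return -1;
  }
  return 0;   /* complete greeting, safe to parse */
}
```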
This issue was discovered in CI job 'memory-sanitizer' test results.
Test 2055 was failing due to the SOCKS5 test server not running. It was
not running because either it crashed or memory sanitizer aborted it
during Test 728. Test 728 connects to the SOCKS5 test server on a
redirect but does not send any data on purpose. The test server was not
prepared for that.
Reported-by: Dan Fandrich
Fixes https://github.com/curl/curl/issues/12117
Closes https://github.com/curl/curl/pull/12118
This test would show an error message if the output was missing during
the log post-processing step, but the message was not captured by the
test harness and wasn't useful since the normal golden log file
comparison would show the problem more clearly.
Prior to this change the state machine attempted to change the remote
resolve to a local resolve if the hostname was longer than 255
characters. Unfortunately that did not work as intended and caused a
security issue.
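Roughly, the corrected logic can be pictured with this hedged sketch
(illustrative names, not the actual lib/socks.c code): when remote
resolving is requested, a hostname that does not fit the protocol's
one-byte length field now fails the request instead of being silently
handed over to local resolving:
```c
#include <stddef.h>

/* illustrative sketch only: the SOCKS5 protocol carries the hostname
   length in a single byte, so names longer than 255 bytes cannot be
   resolved remotely by the proxy */
static int socks5_check_hostname_len(size_t hostname_len,
                                     int resolve_remotely)
{
  if(resolve_remotely && hostname_len > 255) {
    /* previously: silently switched to local resolving, which the
       rest of the state machine did not handle correctly */
    return -1;   /* now: fail the request with a clear error */
  }
  return 0;
}
```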
Bug: https://curl.se/docs/CVE-2023-38545.html
- make the result print the collected write data, unless a
change in meta flags is detected
- will show the same result even when data arrives via
several writecb invocations
Closes #12068
If a client disconnected and reconnected quickly, before the ftp server
had a chance to respond, the protocol message/ack (ping/pong) sequence
got out of sync, causing messages sent to the old client to be delivered
to the new. A disconnect must now be acknowledged and intermediate
requests thrown out until it is, which ensures that such synchronization
problems can't occur. This problem could affect ftp, pop3, imap and smtp
tests.
Fixes #12002
Closes #12049
The test otherwise could do just about anything (except leak memory in
debug mode) and its bad behaviour wouldn't be detected. Now, check the
resulting cookie file to ensure the cookies are still there.
Closes #12041
msys2 builds actually hit the connect timeout in normal operation, so
lower the timeout from 5 minutes to 5 seconds to reduce test time.
Ref: #11328
Closes #12036
ftpserver.pl correctly cleans up spawned server processes,
but forgets to wait for the shell used to spawn them.
This is barely noticeable during a normal testrun,
but causes process exhaustion and test failure
during a complete torture run of the FTP tests.
Fixes #12018
Closes #12020
Use the test macros to automatically propagate some errors, and check
and log others while running the tests. This can help in debugging
exactly why a test has failed.
On an overloaded server, the default 1 second timeout can go by without
the test server having a chance to respond with the expected headers,
causing tests to fail. Increase the 1 second timeout to 99 seconds so
this failure mode is no longer a problem on test 1129. Some other tests
already set a high value, but make them consistently 99 seconds so if
something goes wrong the test is stalled for less time.
Ref: #11328
This was already done for automake builds but CMake builds were missed.
Test 1086 actually causes the test harness to crash with:
Warning: unable to close filehandle DWRITE properly: Broken pipe at C:/projects/curl/tests/ftpserver.pl line 527
Rather than fix it now, this change leaves test 1086 entirely skipped on
those builds that show this problem.
Follow-up to 589dca761
Ref: #11865
The three tags `<name>`, `</name>` and `<command>` were frequently used
with a leading space, which this removes. The reason this habit is so
widespread in test cases is probably that they have been copied and
pasted. Hence, fixing them all now might curb this practice from now on.
Closes #12028
- refs #11982 where it was noted that paused transfers may
close successfully without delivering the complete data
- made the sample PoC into tests/http/client/h2-pausing.c and
added test_02_27 to reproduce
Closes #11989
Fixes #11982
Reported-by: Harry Sintonen
It sometimes happens that a test hangs during a test run and never
returns. The test harness will wait indefinitely for the results and on
CI servers the CI job will eventually be killed after an hour or two.
At the end of a test run, if results haven't come in within a couple of
minutes, display the status of all test runners and what tests they're
running to help in debugging the problem.
This feature really only kicks in with parallel testing enabled, which
is fine because without parallel testing it's usually easy to tell what
test has hung.
Closes #11980
1. References to curl symbols are now checked to verify that they indeed
exist as man pages. This applies to \f references as well as the names
referenced in the SEE ALSO section.
Allowlist curl.1 since it is not always built.
2. References to curl symbols that lack a section now cause a warning,
since that will prevent them from getting linked properly.
3. Check for "bare" references to curl functions and warn; they should be
formatted as proper references.
Closes #11963
- Enforce a single reference per .BR line
- Skip the quotes around the section number, for example (3)
- Insist on trailing commas on all lines except the last
- Error on a comma after the last SEE ALSO entry
- List the entries alpha-sorted; not enforced, just recommended
Closes #11957
Delete checks and guards for standard C89 headers and assume these are
available: `stdio.h`, `string.h`, `time.h`, `setjmp.h`, `stdlib.h`,
`stddef.h`, `signal.h`.
Some of these we already used unconditionally, some others we only used
for feature checks.
Follow-up to 9c7165e96a #11918 (for `stdio.h` in CMake)
Closes #11940
Seen with llvm 17 on Windows x64.
```
.../curl/tests/server/rtspd.c:136:13: warning: no previous extern declaration for non-static variable 'logdir' [-Wmissing-variable-declarations]
136 | const char *logdir = "log";
| ^
.../curl/tests/server/rtspd.c:136:7: note: declare 'static' if the variable is not intended to be used outside of this translation unit
136 | const char *logdir = "log";
| ^
.../curl/tests/server/rtspd.c:137:6: warning: no previous extern declaration for non-static variable 'loglockfile' [-Wmissing-variable-declarations]
137 | char loglockfile[256];
| ^
.../curl/tests/server/rtspd.c:137:1: note: declare 'static' if the variable is not intended to be used outside of this translation unit
137 | char loglockfile[256];
| ^
.../curl/tests/server/fake_ntlm.c:43:13: warning: no previous extern declaration for non-static variable 'logdir' [-Wmissing-variable-declarations]
43 | const char *logdir = "log";
| ^
.../curl/tests/server/fake_ntlm.c:43:7: note: declare 'static' if the variable is not intended to be used outside of this translation unit
43 | const char *logdir = "log";
| ^
.../curl/src/tool_doswin.c:350:8: warning: possible misuse of comma operator here [-Wcomma]
350 | ++d, ++s;
| ^
.../curl/src/tool_doswin.c:350:5: note: cast expression to void to silence warning
350 | ++d, ++s;
| ^~~
| (void)( )
```
```
.../curl/tests/libtest/lib540.c:146:27: warning: result of comparison 'long' > 2147483647 is always false [-Wtautological-type-limit-compare]
146 | int itimeout = (L > (long)INT_MAX) ? INT_MAX : (int)L;
| ~ ^ ~~~~~~~~~~~~~
1 warning generated.
.../curl/tests/libtest/libntlmconnect.c:195:31: warning: result of comparison 'long' > 2147483647 is always false [-Wtautological-type-limit-compare]
195 | int itimeout = (timeout > (long)INT_MAX) ? INT_MAX : (int)timeout;
| ~~~~~~~ ^ ~~~~~~~~~~~~~
1 warning generated.
.../curl/tests/libtest/lib591.c:117:31: warning: result of comparison 'long' > 2147483647 is always false [-Wtautological-type-limit-compare]
117 | int itimeout = (timeout > (long)INT_MAX) ? INT_MAX : (int)timeout;
| ~~~~~~~ ^ ~~~~~~~~~~~~~
1 warning generated.
.../curl/tests/libtest/lib597.c:99:31: warning: result of comparison 'long' > 2147483647 is always false [-Wtautological-type-limit-compare]
99 | int itimeout = (timeout > (long)INT_MAX) ? INT_MAX : (int)timeout;
| ~~~~~~~ ^ ~~~~~~~~~~~~~
1 warning generated.
```
Seen on macOS Intel:
```
.../curl/tests/server/sws.c:440:64: warning: field precision should have type 'int', but argument has type 'size_t' (aka 'unsigned long') [-Wformat]
msnprintf(logbuf, sizeof(logbuf), "Got request: %s %.*s HTTP/%d.%d",
~~^~
1 warning generated.
```
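The usual fix for this class of warning (shown here as a minimal
standalone example, not the exact sws.c change) is to pass the `*` field
precision as an `int`:
```c
#include <stdio.h>
#include <string.h>

int main(void)
{
  const char *req = "GET /path HTTP/1.1";
  size_t len = strlen(req);
  char logbuf[64];

  /* the '*' field precision argument must have type int,
     so cast the size_t value explicitly */
  snprintf(logbuf, sizeof(logbuf), "Got request: %.*s", (int)len, req);
  puts(logbuf);
  return 0;
}
```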
Closes #11925
- ipfs://<cid>
- ipns://<cid>
This allows you to use ipfs in curl like:
curl ipfs://<cid>
and
curl ipns://<cid>
For more information consult the readme at:
https://curl.se/docs/ipfs.html
Closes #8805
Using the system's provided arpa/tftp.h and optimizing, GCC 12 detects
and reports a stringop-overread warning:
tftpd.c: In function ‘write_behind.isra’:
tftpd.c:485:12: warning: ‘write’ reading between 1 and 2147483647 bytes from a region of size 0 [-Wstringop-overread]
485 | return write(test->ofile, writebuf, count);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from tftpd.c:71:
/usr/include/arpa/tftp.h:58:30: note: source object ‘tu_data’ of size 0
58 | char tu_data[0]; /* data or error string */
| ^~~~~~~
This occurs because writebuf points to this field and the latter
cannot be considered as being of dynamic length because it is not
the last field in the structure. Thus it is bound to its declared
size.
This commit always uses curl's own version of tftp.h where the
target field is last in its structure, effectively avoiding the
warning.
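A hedged illustration of the difference (simplified structs, not the
exact system or curl headers):
```c
/* a zero-sized array that is NOT the last member gets bound to its
   declared size of zero, so reading through a pointer to it trips
   -Wstringop-overread */
struct tftp_system_style {
  short th_opcode;
  short th_block;
  char  tu_data[0];      /* not the final field... */
  char  tu_padding[4];   /* ...so its size stays 0 */
};

/* with the data field last, the compiler can treat it as the start of a
   variable-length payload following the header */
struct tftp_curl_style {
  short th_opcode;
  short th_block;
  char  tu_data[1];      /* last member, effectively variable length */
};
```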
As HAVE_ARPA_TFTP_H is not used anymore, cmake/configure checks for
arpa/tftp.h are removed.
Closes #11897
If uname -r returns something odd, perl could return an error code and
the test would be erroneously skipped. The qx// syntax avoids this.
Follow-up to 08f9b2148
These kernels only send a fraction of the requested amount of the first
large block, invalidating the assumptions of the test and causing it to
fail.
Assisted-by: Christian Weisgerber
Ref: https://curl.se/mail/lib-2023-09/0021.html
Closes #11888
CI builds will now run these tests, but will ignore the results if they
fail. The relevant tests are ones that are sensitive to timing or
have edge conditions that make them more likely to fail on CI servers,
which are often heavily overloaded and slow.
This change only adds two additional tests to be ignored, since the
others already had the flaky keyword.
Closes #11865
Generate alphanumerical random strings.
Prior to this change curl used to create random hex strings. This was
mostly okay, but having alphanumerical random strings is better: the
strings have more entropy in the same space.
The MIME multipart boundary used to be a mere 64 bits of randomness due
to being 16 hex chars. With these changes the boundary is 22
alphanumerical chars, or a little over 130 bits of randomness.
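(For reference: 16 hex characters give 16 * log2(16) = 64 bits, while 22
characters from a 62-character alphabet give 22 * log2(62), roughly 131
bits.)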
Closes #11838