Because automake used to delete the dependency directories (.deps)
wholesale, and doing so had a portability issue, curl's
XC_AMEND_DISTCLEAN greps the Makefiles in an attempt to build a list of
all depfiles and delete them individually instead.
Since commit 08849db866b44510f6b8fd49e313c91a43a3dfd3, automake
switched from deleting directories to deleting individual files. curl's
custom logic now finds many more results with the grep (the filtering
of these results isn't great), which causes massive bloating of the
Makefile, on the order of O(n^2).
Also remove the now-unused XC_AMEND_DISTCLEAN macro group.
References: https://github.com/curl/curl/issues/9843
References: https://debbugs.gnu.org/cgi/bugreport.cgi?bug=59288
Reported-by: Ilmari Lauhakangas
Fixes #9843
Closes #10661
When returned from the CURLOPT_SOCKOPTFUNCTION callback, as when the
app has a custom, already-connected socket that it passes in to
libcurl.
Verifies the fix in #10648
Closes #10651
- httpd is only one of the servers we test with
- the suite covers the HTTP protocol in general, where the default
  test cases need a more beefy environment
Closes #10654
- refs #10646 where reuse was attempted on closed connections in the
cache, leading to an exhaustion of retries on a transfer
- the mistake was that poll events like POLLHUP, POLLERR, etc. were
  regarded as "not dead"
- change the cf-socket filter check to regard such events as an
  indication of corpsiness (a small sketch follows this entry)
- vtls filter checks: fixed interpretation of backend check result
when inconclusive to interrogate status further down the filter
chain.
Reported-by: SendSonS on github
Fixes #10646
Closes #10652
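A rough illustration of the liveness check described above, assuming a
plain poll() with zero timeout; the function name is made up and this
is not curl's internal filter code:

    #include <poll.h>
    #include <stdbool.h>

    /* Stand-in check (not curl's API) for whether a cached connection's
       socket may still be reused. Error/hangup events now count as
       "dead" instead of "not dead". */
    static bool socket_seems_alive(int fd)
    {
      struct pollfd pfd;
      int rc;

      pfd.fd = fd;
      pfd.events = POLLIN;
      pfd.revents = 0;

      rc = poll(&pfd, 1, 0);  /* zero timeout, never blocks */
      if(rc < 0)
        return false;         /* poll failed, treat as dead */
      if(rc == 0)
        return true;          /* no pending events, looks alive */
      return !(pfd.revents & (POLLHUP | POLLERR | POLLNVAL));
    }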
In 7.87.0, if the CURLOPT_SOCKOPTFUNCTION callback returned
CURL_SOCKOPT_ALREADY_CONNECTED, libcurl returned CURLE_OK. In 7.88.0,
even if the callback returns CURL_SOCKOPT_ALREADY_CONNECTED, libcurl
still tries to connect the socket by invoking do_connect().
This is a regression caused by commit
https://github.com/curl/curl/commit/71b7e0161032927cdfb
Fix: check if we are already connected and return CURLE_OK (a usage
sketch follows this entry).
Fixes #10626
Closes #10648
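A minimal usage sketch of the restored behaviour, assuming the app
already holds a connected socket; get_preconnected_socket() is
hypothetical and error handling is omitted:

    #include <curl/curl.h>

    /* Hypothetical helper: returns a socket the application has already
       connected itself. */
    extern curl_socket_t get_preconnected_socket(void);

    static curl_socket_t opensocket_cb(void *clientp, curlsocktype purpose,
                                       struct curl_sockaddr *addr)
    {
      (void)purpose;
      (void)addr;
      /* hand libcurl the already-connected socket */
      return *(curl_socket_t *)clientp;
    }

    static int sockopt_cb(void *clientp, curl_socket_t curlfd,
                          curlsocktype purpose)
    {
      (void)clientp;
      (void)curlfd;
      (void)purpose;
      /* tell libcurl not to connect() the socket itself */
      return CURL_SOCKOPT_ALREADY_CONNECTED;
    }

    static CURLcode fetch_over_preconnected(CURL *curl)
    {
      curl_socket_t sockfd = get_preconnected_socket();
      curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, opensocket_cb);
      curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, &sockfd);
      curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);
      return curl_easy_perform(curl);
    }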
- Change readwrite_upload() to call win_update_buffer_size() no more
than once a second to update SO_SNDBUF (send buffer limit).
Prior to this change, during an upload readwrite_upload() could call
win_update_buffer_size() anywhere from hundreds of times per second up
to, in an extreme test case, 100k times per second (which is likely due
to a bug, see #10618). In the latter case the WPA profiler showed
win_update_buffer_size had the highest capture count in
readwrite_upload. In any case the calls were excessive and unnecessary
(a throttling sketch follows this entry).
Ref: https://github.com/curl/curl/pull/2762
Closes https://github.com/curl/curl/pull/10611
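A rough sketch of the once-per-second throttling described above;
update_send_buffer_limit() stands in for the internal
win_update_buffer_size() and this is not curl's actual code:

    #include <stddef.h>
    #include <time.h>

    /* Stand-in for the expensive SO_SNDBUF update. */
    extern void update_send_buffer_limit(int sockfd, size_t uploaded);

    /* Call the update at most once per second instead of on every
       iteration of the upload loop. */
    static void maybe_update_send_buffer(int sockfd, size_t uploaded)
    {
      static time_t last_update;
      time_t now = time(NULL);

      if(now != last_update) {
        last_update = now;
        update_send_buffer_limit(sockfd, uploaded);
      }
    }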
- refs #10634 where errors in the HTTP/2 framing layer are observed.
- the bug was that on connection reuse, the code attempted to switch
in yet another layer of HTTP/2 handling instead of detecting that
this was already in place.
- added pytest testcase reproducing the issue.
Reported-by: rwmjones on github
Fixes #10634
Closes #10643
- do not try to determine the remote address of a listen socket. There
is none.
- update the remote address of an accepted socket via getpeername(),
  if available (see the sketch after this entry)
Reported-by: Harry Sintonen
Fixes #10622
Closes #10642
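A small sketch of the getpeername() part (not curl's internal code); a
listening socket has no peer, so the same call simply fails for it:

    #include <sys/socket.h>
    #include <stdbool.h>

    /* Fill in the remote address of an accepted socket, if the kernel
       knows one. For a listen socket there is no peer and this fails. */
    static bool get_remote_addr(int sockfd, struct sockaddr_storage *ss)
    {
      socklen_t len = (socklen_t)sizeof(*ss);
      return getpeername(sockfd, (struct sockaddr *)ss, &len) == 0;
    }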
- when h2/h3 eyeballing was involved, unix domain socket
  configurations were not honoured
- configuring --unix-socket will disable HTTP/3 as a candidate for
  eyeballing
- the combination of --unix-socket and --http3-only will fail during
  initialisation
- adding pytest test_11 to reproduce
Reported-by: Jelle van der Waa
Fixes #10633
Closes #10641
These examples need to use more than one URL, otherwise they make no
sense, as --rate only affects the second URL and beyond. The first URL
will always finish at the same time no matter what --rate is given.
Closes #10638
Some IDN sequences are converted into "" (nothing), which can make
this function end up with a zero-length host name, and we cannot
consider that a valid host to continue with.
Reported-by: Maciej Domanski
Closes #10617
This reverts commit e0db842b2a.
This tool seems very restricted in how often it may be used by a
project and thus very quickly starts to report failures simply because
it refuses to run when "there are more runs than allowed".
Closes #10613