... and have curl trim the end when it reaches the expected total number
of bytes instead of over-sending.
Reported-by: JustAnotherArchivist on github
Closes #11223
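For illustration, a minimal libcurl sketch of the resumed-upload case this
targets, using the documented CURLOPT_RESUME_FROM_LARGE and
CURLOPT_INFILESIZE_LARGE options; it is not the tool's implementation and
the helper name is made up:

```c
#include <stdio.h>
#include <curl/curl.h>

/* seek callback so libcurl can position the input at the resume offset */
static int seek_cb(void *userp, curl_off_t offset, int origin)
{
  FILE *f = (FILE *)userp;
  /* (long) cast simplified for the sketch */
  return fseek(f, (long)offset, origin) ? CURL_SEEKFUNC_CANTSEEK :
    CURL_SEEKFUNC_OK;
}

static CURLcode resume_upload(const char *url, const char *path,
                              curl_off_t offset, curl_off_t total_size)
{
  CURLcode result = CURLE_FAILED_INIT;
  FILE *src = fopen(path, "rb");
  CURL *curl = curl_easy_init();
  if(src && curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(curl, CURLOPT_READDATA, src); /* default read cb */
    curl_easy_setopt(curl, CURLOPT_SEEKDATA, src);
    curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
    /* resume point and expected total size of the complete upload
       (assumption: libcurl derives the remaining amount from these) */
    curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, offset);
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, total_size);
    result = curl_easy_perform(curl);
  }
  if(curl)
    curl_easy_cleanup(curl);
  if(src)
    fclose(src);
  return result;
}
```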
libcurl assumes that a --continue-at resumption is done to continue an
upload using the read callback, but neither --data nor --form uses that,
so the option cannot do what the user wants, whatever the user wants
with this strange combination.
Add test 426 to verify.
Reported-by: Smackd0wn on github
Fixes #11081
Closes #11083
The leftmost "label" of the host name can now only match against a
single '*', like browsers have done for a long time.
- extended unit test 1397 for this
- move some SOURCE variables from unit/Makefile.am to unit/Makefile.inc
Reported-by: Hiroki Kurosawa
Closes #11018
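A small self-contained sketch of the tightened rule (illustrative code,
not curl's actual host name check): the pattern's leftmost label must be
exactly '*' and the remaining labels must match the host name
case-insensitively:

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h> /* strcasecmp */

/* Illustrative only: accept "*.example.com" but not "w*.example.com"
   or "*x.example.com"; the wildcard never spans more than one label. */
static bool wildcard_label_match(const char *pattern, const char *host)
{
  const char *pdot = strchr(pattern, '.');
  const char *hdot = strchr(host, '.');

  /* leftmost label of the pattern must be a single '*' */
  if(!pdot || pdot != pattern + 1 || pattern[0] != '*')
    return false;
  /* the host needs a leftmost label too, and at least two labels must
     remain in the pattern after the wildcard (no "*.com" matching) */
  if(!hdot || !strchr(pdot + 1, '.'))
    return false;
  /* the remaining labels must match case-insensitively */
  return strcasecmp(pdot, hdot) == 0;
}
```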
Otherwise, an HTTP test closely following this one with a tight time
constraint (e.g. 672) could fail because the test server keeps sitting
in the wait command for a while.
- with `--proxy-http2`, allow h2 ALPN negotiation with
forward proxies (see the sketch after this entry)
- applies to http: requests against an https: proxy only,
as https: requests will auto-tunnel
- adding an HTTP/1 request parser in http1.c
- removing h2h3.c
- using the new request parser in nghttp2 and all h3 backends
- adding test 2603 for request parser
- adding h2 proxy test cases to test_10_*
scorecard.py: request scoring accidentally always ran curl
with '-v'. Removed that; expect the numbers to double.
labeller: added http1.* and h2-proxy sources to detection
Closes #10967
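On the libcurl side, an h2-capable TLS forward proxy is, as far as I can
tell, selected with the CURLPROXY_HTTPS2 proxy type that goes with
--proxy-http2; a rough sketch with placeholder proxy details:

```c
#include <curl/curl.h>

/* Hedged sketch: drive a plain http: request through an h2-capable
   TLS forward proxy. CURLPROXY_HTTPS2 is assumed to be the proxy type
   backing --proxy-http2; proxy host and port are placeholders. */
static CURLcode fetch_via_h2_proxy(const char *url)
{
  CURLcode result = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url); /* an http:// URL */
    /* no scheme prefix here, so CURLOPT_PROXYTYPE decides the type */
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:3128");
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTPS2);
    /* https:// URLs would still CONNECT-tunnel through the proxy */
    result = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return result;
}
```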
- Use an absolute path for the -L option since the module isn't in the
perl path
- Create the needed test file in a <file> section; <precheck> isn't
intended for this
- Fix the test number in the file name, which was wrong
Follow-up to f754990a
Ref: #10818
Fixes #10889
Closes #10917
Output specific components from the used URL. The following variables
are added for this purpose:
url.scheme, url.user, url.password, url.options, url.host, url.port,
url.path, url.query, url.fragment, url.zoneid
Add the following for outputting parts of the "effective URL":
urle.scheme, urle.user, urle.password, urle.options, urle.host, urle.port,
urle.path, urle.query, urle.fragment, urle.zoneid
Added tests 423 and 424 to verify.
Closes #10853
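These tool-level variables have a libcurl counterpart in the URL API; a
short sketch (not the --write-out implementation) that extracts the same
kind of components with curl_url_get():

```c
#include <stdio.h>
#include <curl/curl.h>

/* Print a few components of a URL, roughly what %{url.scheme},
   %{url.host}, %{url.path} and %{url.query} expose on the tool side. */
static void print_url_parts(const char *url)
{
  CURLU *u = curl_url();
  if(!u)
    return;
  if(!curl_url_set(u, CURLUPART_URL, url, 0)) {
    static const CURLUPart parts[] = { CURLUPART_SCHEME, CURLUPART_HOST,
                                       CURLUPART_PATH, CURLUPART_QUERY };
    static const char *const names[] = { "scheme", "host", "path", "query" };
    size_t i;
    for(i = 0; i < sizeof(parts)/sizeof(parts[0]); i++) {
      char *value = NULL;
      if(!curl_url_get(u, parts[i], &value, 0)) {
        printf("%s: %s\n", names[i], value);
        curl_free(value);
      }
    }
  }
  curl_url_cleanup(u);
}
```

On the tool side the same information now comes straight out of
-w '%{url.host}' and friends.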
These files are generated by the test servers and must therefore be
found in the log directory to make them available to only those servers
once multiple test runners are executing in parallel. They must also not
be deleted with the log files, so they are stored in the pidfile
directory.
Ref: #10818
Closes #10875
Otherwise, it might find the binary in .libs which can cause it to use
the system libcurl which can fail. This error is only visible by
noticing that the test is skipped.
Follow-up to e4dfe6fc
Ref: #10651
- use bufq for send/receive of network data
- use bufq for send/receive of stream data
- use HTTP/2 flow control with no auto window updates to control the
amount of data we are buffering for a stream; the HTTP/2 stream window
is set to 128K after local tests, defined as a code constant for now
(see the sketch after this entry)
- eliminate PAUSEing of nghttp2 processing when receiving data,
since a stream can now take in all DATA nghttp2 forwards
Improved scorecard and adjusted http2 stream window sizes
- scorecard improved output formatting and options default
- scorecard now also benchmarks small requests / second
Closes #10771
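The "no auto window updates" part corresponds to nghttp2's manual
window-update mode; a condensed sketch of the calls involved (not curl's
code; the constant and helper names are illustrative):

```c
#include <nghttp2/nghttp2.h>

#define ILLUSTRATIVE_STREAM_WINDOW (128 * 1024)

/* Turn off nghttp2's automatic WINDOW_UPDATE handling at session
   creation, so the amount of buffered DATA per stream stays bounded
   by how fast we hand it onward. */
static int create_session(nghttp2_session **session,
                          nghttp2_session_callbacks *callbacks,
                          void *user_data)
{
  nghttp2_option *option;
  int rv = nghttp2_option_new(&option);
  if(rv)
    return rv;
  nghttp2_option_set_no_auto_window_update(option, 1);
  rv = nghttp2_session_client_new2(session, callbacks, user_data, option);
  nghttp2_option_del(option);
  return rv;
}

/* At stream start, bump the stream's local window to the chosen size
   (the entry above mentions 128K as the value picked after testing). */
static int set_stream_window(nghttp2_session *session, int32_t stream_id)
{
  return nghttp2_session_set_local_window_size(session, NGHTTP2_FLAG_NONE,
                                               stream_id,
                                               ILLUSTRATIVE_STREAM_WINDOW);
}

/* Call this when `len` bytes of DATA for `stream_id` have actually been
   consumed; nghttp2 then emits WINDOW_UPDATE frames as needed, which is
   what keeps the flow-control window open. */
static int data_consumed(nghttp2_session *session, int32_t stream_id,
                         size_t len)
{
  return nghttp2_session_consume(session, stream_id, len);
}
```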
RFC 7686 states that:
> Applications that do not implement the Tor
> protocol SHOULD generate an error upon the use of .onion and
> SHOULD NOT perform a DNS lookup.
Let's do that.
https://www.rfc-editor.org/rfc/rfc7686#section-2
Add tests 1471 and 1472 to verify
Fixes #543
Closes #10705
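A self-contained sketch of the check this implies, matching both a bare
"onion" host and any name under ".onion" case-insensitively; illustrative
code, not curl's resolver change:

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h> /* strncasecmp */

/* Return true if the host name is in the special-use .onion TLD and
   should be refused instead of handed to the resolver. */
static bool is_onion_host(const char *host)
{
  size_t len = strlen(host);
  /* ignore a single trailing root dot, as in "example.onion." */
  if(len && host[len - 1] == '.')
    len--;
  if(len == 5)
    return strncasecmp(host, "onion", 5) == 0;
  return (len > 6) && (strncasecmp(&host[len - 6], ".onion", 6) == 0);
}
```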
Some IP cameras occasionally send malformed RTSP interleaved frames,
which can cause curl_easy_perform to return 1
(CURLE_UNSUPPORTED_PROTOCOL). This change attempts to skip clearly
incorrect RTSP interleaved frame data.
Closes #10808
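For context, an RTSP interleaved frame (RFC 2326, section 10.12) is a '$'
byte, a one-byte channel id and a two-byte big-endian payload length; a
sketch of resynchronizing on the next plausible frame start (illustrative,
not the code in this change):

```c
#include <stddef.h>
#include <stdint.h>

/* Scan a buffer for the next byte that could start an interleaved
   frame. Returns the offset of that byte, or `len` if none is found,
   so the caller can drop the garbage in between. */
static size_t rtsp_skip_to_frame(const uint8_t *buf, size_t len)
{
  size_t i;
  for(i = 0; i < len; i++) {
    if(buf[i] == '$')
      return i;
  }
  return len;
}

/* Parse the 4-byte header once a '$' has been found; the caller can
   then sanity-check the payload length against what follows. */
static int rtsp_frame_header(const uint8_t *buf, size_t len,
                             uint8_t *channel, uint16_t *payload_len)
{
  if(len < 4 || buf[0] != '$')
    return -1;
  *channel = buf[1];
  *payload_len = (uint16_t)((buf[2] << 8) | buf[3]);
  return 0;
}
```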
Adding `bufq`:
- at init() time configured to hold up to `n` chunks of `m` bytes each.
- various methods for reading from and writing to it.
- `peek` support to get access to buffered data without copy
- `pass` support to allow buffer flushing on write if it becomes full
- use case: IO buffers for dynamic reads and writes that do not blow up
- distinct from `dynbuf` in that:
- it maintains a read position
- writes to a full bufq return CURLE_AGAIN instead of the buffer nuking itself
- Init options:
- SOFT_LIMIT: allow writes into a full bufq
- NO_SPARES: free empty chunks right away
- a `bufc_pool` that can keep a number of spare chunks to
be shared between different `bufq` instances
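A compressed, self-contained sketch of the chunk-queue idea with
illustrative types and names (not curl's bufq API): fixed-size chunks in
a list, a read position that advances, and writes that stop at the limit
instead of growing without bound:

```c
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE 1024   /* `m` bytes per chunk (illustrative) */
#define MAX_CHUNKS 4      /* `n` chunks per queue (illustrative) */

struct chunk {
  struct chunk *next;
  size_t r, w;                 /* read and write offsets within data */
  unsigned char data[CHUNK_SIZE];
};

struct miniq {                 /* stand-in for a bufq */
  struct chunk *head, *tail;
  size_t nchunks;
};

/* append as much of buf as fits; returns bytes taken, 0 means "again" */
static size_t miniq_write(struct miniq *q, const unsigned char *buf,
                          size_t len)
{
  size_t taken = 0;
  while(taken < len) {
    struct chunk *t = q->tail;
    if(!t || t->w == CHUNK_SIZE) {       /* need a fresh chunk */
      if(q->nchunks == MAX_CHUNKS)
        break;                           /* full: caller retries later */
      t = calloc(1, sizeof(*t));
      if(!t)
        break;
      if(q->tail)
        q->tail->next = t;
      else
        q->head = t;
      q->tail = t;
      q->nchunks++;
    }
    {
      size_t n = CHUNK_SIZE - t->w;
      if(n > len - taken)
        n = len - taken;
      memcpy(t->data + t->w, buf + taken, n);
      t->w += n;
      taken += n;
    }
  }
  return taken;
}

/* copy out up to len bytes from the read position, freeing drained chunks */
static size_t miniq_read(struct miniq *q, unsigned char *buf, size_t len)
{
  size_t got = 0;
  while(got < len && q->head) {
    struct chunk *h = q->head;
    size_t n = h->w - h->r;
    if(n > len - got)
      n = len - got;
    memcpy(buf + got, h->data + h->r, n);
    h->r += n;
    got += n;
    if(h->r == h->w && h->w == CHUNK_SIZE) { /* fully drained chunk */
      q->head = h->next;
      if(!q->head)
        q->tail = NULL;
      free(h);
      q->nchunks--;
    }
  }
  return got;
}
```

The SOFT_LIMIT and NO_SPARES options and the shared `bufc_pool` then tune
how eagerly such chunks are allocated or recycled; the sketch leaves those
out.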
Adding `dynhds`:
- a straightforward list of name+value pairs as used for HTTP headers
- headers can be appended dynamically
- headers can be removed again
- headers can be replaced
- headers can be looked up
- http/1.1 formatting into a `dynbuf`
- configured at init() with limits on header counts and total string
sizes
- use case: pass an HTTP request or response around without being version
specific
- express an HTTP request without a curl easy handle (used in h2 proxy
tunnels)
- future extension possibilities:
- conversions of `dynhds` to nghttp2/nghttp3 name+value arrays
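In the same spirit, a tiny sketch of a version-agnostic header list with
HTTP/1.1 formatting (illustrative names, error handling trimmed, not
curl's dynhds API):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct hdr_entry {
  char *name;
  char *value;
};

struct hdr_list {              /* stand-in for a dynhds */
  struct hdr_entry *e;
  size_t count;
};

/* append a copied name+value pair; returns 0 on success */
static int hdr_add(struct hdr_list *h, const char *name, const char *value)
{
  struct hdr_entry *e = realloc(h->e, (h->count + 1) * sizeof(*e));
  if(!e)
    return 1;
  h->e = e;
  h->e[h->count].name = strdup(name);    /* NULL checks trimmed */
  h->e[h->count].value = strdup(value);
  h->count++;
  return 0;
}

/* write the list as HTTP/1.1 header lines into buf */
static int hdr_format_h1(const struct hdr_list *h, char *buf, size_t buflen)
{
  size_t i, used = 0;
  for(i = 0; i < h->count; i++) {
    int n = snprintf(buf + used, buflen - used, "%s: %s\r\n",
                     h->e[i].name, h->e[i].value);
    if(n < 0 || (size_t)n >= buflen - used)
      return 1;
    used += (size_t)n;
  }
  return 0;
}
```

A fuller version could translate the same list into nghttp2/nghttp3
name+value arrays, which is the future extension mentioned above.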
Closes #10720
All S3 requests default to UNSIGNED-PAYLOAD and add the required
x-amz-content-sha256 header. This allows CURLAUTH_AWS_SIGV4 to correctly
sign S3 requests to Amazon with no additional configuration.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Closes #9995
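A hedged libcurl sketch of what this enables: a plain S3 GET signed via
CURLOPT_AWS_SIGV4 with nothing S3-specific beyond the provider string;
bucket, region and credentials are placeholders:

```c
#include <curl/curl.h>

/* Fetch an object from S3 with SigV4 signing. With the change above,
   the payload is signed as UNSIGNED-PAYLOAD and x-amz-content-sha256
   is added automatically, so no extra headers are needed here. */
static CURLcode s3_get(void)
{
  CURLcode result = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://examplebucket.s3.us-east-1.amazonaws.com/key");
    /* provider1:provider2[:region[:service]] */
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    /* access key and secret key, placeholders here */
    curl_easy_setopt(curl, CURLOPT_USERPWD, "AKIDEXAMPLE:SECRETKEYEXAMPLE");
    result = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return result;
}
```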
This is already how curl is documented to behave in Everything curl, but
in actuality only short POSTs skip this. This should knock 30 seconds
off a full run of the test suite since the 100-continue timeout will no
longer be hit.
Closes #10740
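Applications that still run into the wait can also suppress the header
themselves; a short sketch using the standard blank-Expect trick:

```c
#include <curl/curl.h>

/* POST without Expect: 100-continue by sending a blank Expect header,
   so the request body goes out immediately instead of after a pause. */
static CURLcode post_without_expect(const char *url, const char *data)
{
  CURLcode result = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  struct curl_slist *hdrs = curl_slist_append(NULL, "Expect:");
  if(curl && hdrs) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    result = curl_easy_perform(curl);
  }
  if(hdrs)
    curl_slist_free_all(hdrs);
  if(curl)
    curl_easy_cleanup(curl);
  return result;
}
```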
It results in the error "NSS error -5985 (PR_ADDRESS_NOT_SUPPORTED_ERROR)".
Disabled test 1470 for NSS builds and documented the restriction.
Reported-by: Dan Fandrich
Fixes #10723
Closes #10734
When returned from the CURLOPT_SOCKOPTFUNCTION callback, for example
when the application has connected a custom socket itself and passed it
in to libcurl.
Verifies the fix in #10648
Closes #10651
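The scenario, sketched with the documented callbacks and presumably the
CURL_SOCKOPT_ALREADY_CONNECTED return this refers to; the socket is
assumed to have been connected by the application beforehand:

```c
#include <curl/curl.h>

/* hand libcurl a socket the application connected itself */
static curl_socket_t open_cb(void *clientp, curlsocktype purpose,
                             struct curl_sockaddr *addr)
{
  (void)purpose;
  (void)addr;
  return *(curl_socket_t *)clientp;  /* the pre-connected socket */
}

/* tell libcurl the socket is already connected so it skips connecting */
static int sockopt_cb(void *clientp, curl_socket_t fd, curlsocktype purpose)
{
  (void)clientp;
  (void)fd;
  (void)purpose;
  return CURL_SOCKOPT_ALREADY_CONNECTED;
}

static void use_external_socket(CURL *curl, curl_socket_t *fd)
{
  curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_cb);
  curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, fd);
  curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);
}
```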