Instead of calling connclose(): when HTTP/2 is used there is no need to
close the connection, as stopping the current transfer is enough.
Reported-by: Evangelos Foutras
Closes #8665
They are not allowed by the protocol and allowing them risks that curl
misbehaves in places where C functions are used but will not operate on
the full contents. Further, they are not supported by hyper and they
cause problems for the upcoming headers API work.
Updated test 262 to verify this, and enabled it for hyper as well.
Closes #8601
Also add STRCONST, a macro that returns a string literal and its length
for functions that take "string,len" arguments.
Removes unnecessary calls to strlen().
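A minimal sketch of the idea (the exact definition in curl may differ):
the macro expands a string literal into a pointer/length argument pair,
with the length computed at compile time instead of via strlen().

  #include <stdio.h>
  #include <string.h>

  /* expand a string literal into "pointer, length" without strlen() */
  #define STRCONST(x) x, sizeof(x) - 1

  /* hypothetical helper taking a (string, length) pair */
  static int begins_with(const char *prefix, size_t plen, const char *s)
  {
    return strncmp(s, prefix, plen) == 0;
  }

  int main(void)
  {
    const char *hdr = "Content-Length: 5";
    /* expands to begins_with("Content-Length:", 15, hdr) */
    printf("%d\n", begins_with(STRCONST("Content-Length:"), hdr));
    return 0;
  }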
Closes #8391
The only h2 pseudo header that wasn't previously possible to change by
a user. This change also makes it impossible to send an HTTP/1 header
that starts with a colon, which I don't think anyone does anyway.
The other pseudo headers can be changed indirectly with a suitably
crafted request.
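As an illustration only (the header below is made up, not taken from
the change), a colon-prefixed custom header passed via
CURLOPT_HTTPHEADER is now refused:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_slist *hdrs = NULL;
      /* hypothetical attempt to inject a pseudo header; headers that
         start with a colon are no longer passed on */
      hdrs = curl_slist_append(hdrs, ":authority: evil.example");
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_perform(curl);
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }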
Reported-by: siddharthchhabrap on github
Fixes #8381
Closes #8393
The tools.ietf.org domain has been deprecated a while now, with the
links being redirected to datatracker.ietf.org.
Rather than make people eat that redirect time, this change switches the
URL to a more canonical source.
Closes #8317
This is done by having native code do the haproxy header output before
hyper issues its request. The little downside with this approach is that
we need the entire Curl_buffer_send() function built, which is otherwise
not used for hyper builds.
If hyper ends up getting native support for the haproxy protocols we can
backpedal on this.
Enables test 1455 and 1456
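For reference, a rough sketch (not curl's actual code, addresses and
ports are made up) of the kind of haproxy v1 header line the native
code writes on the connection before hyper sends its request:

  #include <stdio.h>

  int main(void)
  {
    char line[108]; /* a haproxy v1 header line is at most 107 bytes */
    snprintf(line, sizeof(line), "PROXY %s %s %s %d %d\r\n",
             "TCP4", "192.0.2.1", "192.0.2.7", 54321, 80);
    fputs(line, stdout);
    return 0;
  }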
Closes #8034
... which then also includes negative ones as test 1430 uses.
This makes the native and hyper backends act identically on this and
therefore test 1430 can now be enabled when building with hyper. Adjust
test 1431 as well.
Closes #7909
- Make content length (ie download size) accessible to the user in the
header callback, but only after all headers have been processed (ie
only in the final call to the header callback).
Background:
For a long time the content length could be retrieved in the header
callback via CURLINFO_CONTENT_LENGTH_DOWNLOAD_T as soon as it was parsed
by curl.
Changes were made in 8a16e54 (precedes 7.79.0) to ignore content length
if any transfer encoding is used. A side effect of that was that
content length was not set by libcurl until after the header callback
was called the final time, because until all headers are processed it
cannot be determined if content length is valid.
This change keeps the same intention (all headers must be processed)
but now the content length is made available before the final call to
the header function, the one with a blank header that indicates all
headers have been processed.
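A minimal sketch of reading it from within the header callback,
assuming the easy handle is passed as CURLOPT_HEADERDATA (the options
and info names are real, the rest is illustrative and skips error
handling):

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t hdr_cb(char *buf, size_t size, size_t nitems, void *userdata)
  {
    CURL *curl = (CURL *)userdata;
    size_t len = size * nitems;
    /* the final call passes the blank "\r\n" header */
    if(len == 2 && buf[0] == '\r' && buf[1] == '\n') {
      curl_off_t cl;
      if(!curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &cl))
        printf("content length: %" CURL_FORMAT_CURL_OFF_T "\n", cl);
    }
    return len;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, hdr_cb);
      curl_easy_setopt(curl, CURLOPT_HEADERDATA, curl);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }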
Bug: https://github.com/curl/curl/commit/8a16e54#r57374914
Reported-by: sergio-nsk@users.noreply.github.com
Co-authored-by: Daniel Stenberg
Fixes https://github.com/curl/curl/issues/7804
Closes https://github.com/curl/curl/pull/7803
When the "reason phrase" in the HTTP status line starts with a digit,
that was treated as the fourth response code digit and curl would claim
the response to be non-compliant.
Added test 1466 to verify this case.
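A hypothetical status line of this kind (not the one from the report):

  HTTP/1.1 200 0K, all good

Here the leading '0' of the reason phrase was mistakenly read as a
fourth status code digit.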
Regression brought by 5dc594e44f
Reported-by: Glenn de boer
Fixes #7738
Closes #7739
Make the built-in HTTP parser behave similarly to hyper and reject any
HTTP response using more than 3 digits for the response code.
Updated test 1432 accordingly.
Enabled test 1432 in the hyper builds.
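For example, a (made up) status line like

  HTTP/1.1 2000 OK

is now rejected by both backends.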
Closes #7641
warning C4189: 'netrc_user_changed': local variable is initialized but
not referenced
warning C4189: 'netrc_passwd_changed': local variable is initialized but
not referenced
Closes #7423
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
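A stand-alone sketch of the resulting convention (this stand-in is not
curl's actual implementation): the format string carries no newline and
the logging function appends the line ending itself, like failf() does.

  #include <stdarg.h>
  #include <stdio.h>

  /* stand-in for curl's internal infof(): it appends the newline, so
     callers must not put one in the format string */
  static void infof(void *data, const char *fmt, ...)
  {
    va_list ap;
    (void)data;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fputc('\n', stderr);
  }

  int main(void)
  {
    infof(NULL, "Connection %d seems to be dead", 7); /* no trailing \n */
    return 0;
  }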
Closes #7357
- Don't set the size of the piece of data to send to the rate limit if
that limit is larger than the buffer size that will hold the piece.
Prior to this change if CURLOPT_MAX_SEND_SPEED_LARGE
(curl tool: --limit-rate) was set then it was possible that a temporary
buffer used for uploading could be written to out of bounds. A likely
scenario for this would be a non-trivial amount of post data combined
with a rate limit larger than CURLOPT_UPLOAD_BUFFERSIZE (default 64k).
The bug was introduced in 24e469f which is in releases since 7.76.0.
Reproduction:
perl -e "print '0' x 200000" > tmp
curl --limit-rate 128k -d @tmp httpbin.org/post
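A simplified sketch of the idea behind the fix (names are made up, this
is not the code from 24e469f): the amount handed to the upload buffer
is clamped to the buffer size even when the rate limit allows more.

  #include <curl/curl.h>

  size_t pick_send_size(curl_off_t rate_limit, size_t buffer_size,
                        size_t remaining)
  {
    size_t nsend = remaining;
    if(rate_limit > 0 && (curl_off_t)nsend > rate_limit)
      nsend = (size_t)rate_limit;   /* honor the rate limit */
    if(nsend > buffer_size)
      nsend = buffer_size;          /* never exceed the upload buffer */
    return nsend;
  }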
Reported-by: Richard Marion
Fixes https://github.com/curl/curl/issues/7308
Closes https://github.com/curl/curl/pull/7315
Introducing an 'isproxy' argument to the connect function so that it
knows whether to store the time stamp or not.
Reported-by: Yongkang Huang
Fixes #7274
Closes #7274
The libssh2 backend has its SSH session associated with the connection but
the callback context is the easy handle, so when a connection gets
attached to a transfer, the protocol handler now allows for a custom
function to get used to set things up correctly.
Reported-by: Michael O'Farrell
Fixes #6898
Closes #7078
Assumed to be a minor coding style improvement with no behavior change.
A modern compiler is expected to have the calculation optimized during
compilation. It may be deemed okay even if that's not the case, since
the added overhead is considered very low.
Closes #7032
Previously this logic would cap the send to CURL_MAX_WRITE_SIZE bytes,
but for the situations where a larger upload buffer has been set, this
function can benefit from sending more bytes. With the default size
used, this behaves the same as before.
Also changed the storage of the size to an 'unsigned int' as it is not
allowed to be set larger than 2M.
Also added cautions to the man pages about changing buffer sizes at
run-time.
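An illustrative use of the option (not from the commit); libcurl clamps
the value to its allowed range, currently at most 2MB:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      /* ask for a 512kB upload buffer so a single send can pass more
         data than the old CURL_MAX_WRITE_SIZE cap allowed */
      curl_easy_setopt(curl, CURLOPT_UPLOAD_BUFFERSIZE, 512L * 1024L);
      /* a CURLOPT_READFUNCTION etc. would be set before performing */
      curl_easy_cleanup(curl);
    }
    return 0;
  }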
Closes #7022
A reused transfer handle could otherwise reuse the previous leftover
buffer and havoc would ensue.
Reported-by: sergio-nsk on github
Fixes #7018
Closes #7021
By making sure never to send off more than the allowed number of bytes
per second the speed limit logic is given more room to actually work.
Reported-by: Fabian Keil
Bug: https://curl.se/mail/lib-2021-03/0042.html
Closes #6797
Both were used for the same purposes and there was no logical separation
between them. Combined, this also saves 16 bytes thanks to fewer holes
in my test build.
Closes #6798
To make sure the Host: header and the URL provide the same authority
portion when sent to the proxy, strip the default port number from the
URL if one was provided.
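For example (host made up), a request for http://example.com:80/index.html
through an HTTP proxy now carries the same authority in both places:

  GET http://example.com/index.html HTTP/1.1
  Host: example.com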
Reported-by: Michael Brown
Fixes #6769
Closes #6778
When asked to resume a download, libcurl will convert that to HTTP
logic and if the entire file has already been transferred it results in
a 416 response from the HTTP server. With CURLOPT_FAILONERROR set in
that scenario, it should *not* lead to an error return.
Updated test 1156, added test 1273
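A hedged sketch of the scenario (URL and file size are made up):
resuming at the full length makes the server answer 416, and with this
fix CURLOPT_FAILONERROR no longer turns that into an error return.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    CURLcode res = CURLE_OK;
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/file.bin");
      curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
      /* offset equal to the size of the already downloaded file */
      curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, (curl_off_t)1048576);
      res = curl_easy_perform(curl);
      /* the 416 response no longer makes this return an error */
      curl_easy_cleanup(curl);
    }
    return (int)res;
  }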
Reported-by: Jonathan Watt
Fixes #6740
Closes #6753