`Curl_pgrsSetUploadCounter` should be passed a total count, not an
increment.
This changes the failing diff for test 579 with hyper from this:
```
Progress callback called with UL 0 out of 0[LF]
-Progress callback called with UL 8 out of 0[LF]
-Progress callback called with UL 16 out of 0[LF]
-Progress callback called with UL 26 out of 0[LF]
-Progress callback called with UL 61 out of 0[LF]
-Progress callback called with UL 66 out of 0[LF]
+Progress callback called with UL 29 out of 0[LF]
```
to this:
```
Progress callback called with UL 0 out of 0[LF]
-Progress callback called with UL 8 out of 0[LF]
-Progress callback called with UL 16 out of 0[LF]
-Progress callback called with UL 26 out of 0[LF]
-Progress callback called with UL 61 out of 0[LF]
-Progress callback called with UL 66 out of 0[LF]
+Progress callback called with UL 40 out of 0[LF]
```
Presumably a step in the right direction.
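For illustration, a minimal sketch of the intended calling pattern; the
`upload_ctx` struct and its `sent_total` field are made-up names, not
curl's actual state:
```c
#include "urldata.h"    /* struct Curl_easy, curl_off_t */
#include "progress.h"   /* Curl_pgrsSetUploadCounter() */

/* hypothetical per-transfer state holding the running upload total */
struct upload_ctx {
  curl_off_t sent_total;
};

/* accumulate the bytes just handed over and report the running total,
   not the per-chunk increment */
static void report_upload(struct Curl_easy *data, struct upload_ctx *ctx,
                          size_t chunk_len)
{
  ctx->sent_total += (curl_off_t)chunk_len;
  Curl_pgrsSetUploadCounter(data, ctx->sent_total);
}
```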
Closes #11780
Some of these changes come from comparing `Curl_http` and
`start_CONNECT`, which are similar, and adding the things that are
present in one but missing in the other.
The most important changes:
- In `start_CONNECT`, add a missing `hyper_clientconn_free` call on the
happy path.
- In `start_CONNECT`, add a missing `hyper_request_free` on the error
path.
- In `bodysend`, add a missing `hyper_body_free` on an early-exit path.
- In `bodysend`, remove an unnecessary `hyper_body_free` on a different
error path that would cause a double-free.
https://docs.rs/hyper/latest/hyper/ffi/fn.hyper_request_set_body.html
says of `hyper_request_set_body`: "This takes ownership of the
hyper_body *, you must not use it or free it after setting it on the
request." This is true even if `hyper_request_set_body` returns an
error; I confirmed this by looking at the hyper source code.
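A minimal sketch of that ownership rule (not curl's actual `bodysend`;
the helper name is made up):
```c
#include <hyper.h>

/* Attach a body to a request. Once hyper_request_set_body() has been
   called, the request owns the body and the caller must not free it,
   even when the call reports an error. */
static int attach_body(hyper_request *req)
{
  hyper_body *body = hyper_body_new();
  if(!body)
    return 1;   /* nothing was attached, nothing to clean up */

  if(hyper_request_set_body(req, body) != HYPERE_OK) {
    /* do NOT call hyper_body_free(body) here: ownership has already
       moved to the request and freeing it again is the double-free
       described above */
    return 1;
  }
  return 0;
}
```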
Other changes are minor but make things slightly nicer.
Closes #11745
There is a `hyper_clientconn_free` call on the happy path, but not one
on the error path. This commit adds one.
Fixes the second memory leak reported by Valgrind in #10803.
Fixes #10803
Closes #11729
A request created with `hyper_request_new` must be consumed by either
`hyper_clientconn_send` or `hyper_request_free`.
This is not terrifically clear from the hyper docs --
`hyper_request_free` is documented only with "Free an HTTP request if
not going to send it on a client" -- but a perusal of the hyper code
confirms it.
This commit adds a `hyper_request_free` to the `error:` path in
`Curl_http` so that the request is consumed when an error occurs after
the request is created but before it is sent.
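A hedged sketch of that consumption rule (the function and its
parameters are illustrative, not curl's code):
```c
#include <hyper.h>

/* A request from hyper_request_new() must end up in exactly one of
   hyper_clientconn_send() or hyper_request_free(). */
static hyper_task *send_or_discard(hyper_clientconn *conn, int give_up)
{
  hyper_request *req = hyper_request_new();
  if(!req)
    return NULL;

  if(give_up) {
    hyper_request_free(req);   /* not sending it, so free it ourselves */
    return NULL;
  }
  /* sending consumes the request; do not free it afterwards */
  return hyper_clientconn_send(conn, req);
}
```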
Fixes the first memory leak reported by Valgrind in #10803.
Closes #11729
To avoid abuse. The limit is set to 300 KB for the accumulated size of
all received HTTP headers for a single response. Incomplete research
suggests that Chrome uses a 256-300 KB limit, while Firefox allows up to
1 MB.
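A rough sketch of such a cap; the constant and the counter field are
hypothetical names, not necessarily what the patch uses:
```c
#include "urldata.h"   /* struct Curl_easy */
#include "sendf.h"     /* failf() */

#define MAX_RESP_HEADER_BYTES (300 * 1024)   /* 300 KB per response */

/* accumulate received header bytes and reject a response whose headers
   grow past the limit */
static CURLcode account_header_size(struct Curl_easy *data, size_t hdrlen)
{
  data->req.resp_header_bytes += hdrlen;   /* hypothetical counter field */
  if(data->req.resp_header_bytes > MAX_RESP_HEADER_BYTES) {
    failf(data, "Too large response headers");
    return CURLE_RECV_ERROR;
  }
  return CURLE_OK;
}
```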
Closes #11582
- refs #11203 where hyper was reported as being slow
- fixes hyper_executor_poll to loop until it is out of tasks, as advised
  by @seanmonstar in https://github.com/hyperium/hyper/issues/3237 (see
  the sketch after this list)
- added a fix in hyper io handling for detecting EAGAIN
- added some debug logs to see IO results
- pytest http/1.1 test cases pass
- pytest h2 test cases fail on connection reuse. HTTP/2
connection reuse does not seem to work. Hyper submits
a request on a reused connection, curl's IO works and
thereafter hyper declares `Hyper: [1] operation was canceled: connection closed`
on stderr without any error being logged before.
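The polling fix amounts to draining the executor instead of handling a
single task per call; a hedged sketch, not the actual curl code:
```c
#include <hyper.h>

/* keep polling until hyper_executor_poll() returns NULL, i.e. until the
   executor has no more ready tasks to hand out */
static void drain_executor(const hyper_executor *exec)
{
  hyper_task *task;
  while((task = hyper_executor_poll(exec)) != NULL) {
    /* take hyper_task_value(task) here if the task carries a result,
       then release the task */
    hyper_task_free(task);
  }
}
```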
Fixes #11203
Reported-by: Gisle Vanem
Advised-by: Sean McArthur
Closes #11344
Out of 415 labels throughout the code base, 86 were not at the start of
the line, which means that labels at the start of the line is the
favoured style overall, with 329 instances.
Out of the 86 labels not at the start of the line:
* 75 were indented with the same indentation level of the following line
* 8 were indented with exactly one space
* 2 were indented with one fewer indentation level than the following
  line
* 1 was indented with the indentation level of the following line minus
  three spaces (probably unintentional)
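For reference, the favoured style looks like this (a standalone example,
not taken from the code base):
```c
#include <stdlib.h>

static int do_thing(void)
{
  int result = 0;
  char *mem = malloc(100);
  if(!mem) {
    result = 1;
    goto out;
  }
  /* ... use mem ... */
out:                     /* label at the start of the line */
  free(mem);
  return result;
}
```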
Co-Authored-By: Viktor Szakats
Closes #11134
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow copyright statements without years
Closes #10205
- Replace `Github` with `GitHub`.
- Replace `windows` with `Windows`.
- Replace `advice` with `advise` where a verb is used.
- A few fixes that remove repeated words.
- Replace `a HTTP` with `an HTTP`.
Closes #9802
Next Protocol Negotiation is a TLS extension that was created and used
for agreeing to use the SPDY protocol (the precursor to HTTP/2) for
HTTPS. In the early days of HTTP/2, before the spec was finalized and
shipped, the protocol could be enabled using this extension with some
servers.
curl has supported the NPN extension with some TLS backends since then,
via the command line option `--npn` and in libcurl with
`CURLOPT_SSL_ENABLE_NPN`.
HTTP/2 proper is made to use the ALPN (Application-Layer Protocol
Negotiation) extension, and the NPN extension serves no purpose
anymore. The HTTP/2 spec was published in May 2015.
Today, use of NPN in the wild should be extremely rare and most likely
totally extinct. Chrome removed NPN support in Chrome 51, shipped in
June 2016. Firefox 53 removed it in April 2017.
Closes #9307
As virtually no caller checked the return code, and those that did
wrongly treated it as a CURLcode. Detected by the icc compiler warning:
enumerated type mixed with another type
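A generic illustration of the warning; the enum and function names are
made up and not the ones in the patch:
```c
#include <curl/curl.h>

typedef enum { PROG_OK, PROG_ABORT } prog_result;   /* hypothetical */

static prog_result update_progress(void)
{
  return PROG_OK;
}

static CURLcode caller(void)
{
  /* icc: "enumerated type mixed with another type" - a prog_result is
     silently treated as a CURLcode here */
  CURLcode result = update_progress();
  return result;
}
```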
Closes #9179
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
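For C sources the header-comment route boils down to an SPDX tag plus a
copyright line; a hedged example, since the exact banner text in the
repository may differ:
```c
/* Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
 *
 * SPDX-License-Identifier: curl
 */
```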
Closes #8869
Hyper now has the ability to preserve header order. This commit adds a
few lines setting the connection options for this feature.
Related to issue #8617
Closes #8707
- Make content length (ie download size) accessible to the user in the
header callback, but only after all headers have been processed (ie
only in the final call to the header callback).
Background:
For a long time the content length could be retrieved in the header
callback via CURLINFO_CONTENT_LENGTH_DOWNLOAD_T as soon as it was parsed
by curl.
Changes were made in 8a16e54 (precedes 7.79.0) to ignore content length
if any transfer encoding is used. A side effect of that was that
content length was not set by libcurl until after the header callback
was called the final time, because until all headers are processed it
cannot be determined if content length is valid.
This change keeps the same intention (all headers must be processed)
but now the content length is available before the final call to the
header function that indicates all headers have been processed (ie
a blank header).
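In application terms, a header callback can now read the value as soon
as the blank end-of-headers line arrives; a hedged example using the
public API:
```c
#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* header callback: by the time the blank line that ends the headers
   arrives, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T is already populated */
static size_t header_cb(char *buffer, size_t size, size_t nitems,
                        void *userdata)
{
  CURL *curl = userdata;   /* easy handle, passed via CURLOPT_HEADERDATA */
  size_t len = size * nitems;

  if(len == 2 && !memcmp(buffer, "\r\n", 2)) {   /* final, blank header */
    curl_off_t cl = -1;
    if(!curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &cl))
      printf("content length: %" CURL_FORMAT_CURL_OFF_T "\n", cl);
  }
  return len;
}
```
Wire it up with CURLOPT_HEADERFUNCTION set to the callback and
CURLOPT_HEADERDATA set to the easy handle.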
Bug: https://github.com/curl/curl/commit/8a16e54#r57374914
Reported-by: sergio-nsk@users.noreply.github.com
Co-authored-by: Daniel Stenberg
Fixes https://github.com/curl/curl/issues/7804
Closes https://github.com/curl/curl/pull/7803
Pass on better return codes when errors occur within Curl_http instead
of insisting that CURLE_OUT_OF_MEMORY is the only possible one.
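A hedged before/after sketch; `some_helper` stands for any routine that
already produces a precise CURLcode of its own:
```c
#include "urldata.h"   /* struct Curl_easy */

static CURLcode some_helper(struct Curl_easy *data);   /* hypothetical */

static CURLcode build_request(struct Curl_easy *data)
{
  CURLcode result = some_helper(data);
  if(result)
    return result;   /* previously: return CURLE_OUT_OF_MEMORY; */
  return CURLE_OK;
}
```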
Pointed-out-by: Jay Satiro
Closes #7851
1. it's superfluous
2. it didn't work identically to the Curl_hyper_stream one which could
cause problems like #7486
Pointed-out-by: David Cook
Closes #7499
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string (see
  the illustrative call after this list)
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
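In practice the call sites just drop the trailing newline; an
illustrative call with a made-up message and helper:
```c
#include "sendf.h"   /* infof() */

static void note_retry(struct Curl_easy *data)
{
  /* one line of text and no trailing newline: infof() supplies the line
     ending itself and an assert fires if the format string contains '\n' */
  infof(data, "Retrying the request on a fresh connection");
}
```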
Closes #7357