- add `Curl_hash_offt` as a hashmap between a `curl_off_t` and
an object. Use this in h2+h3 connection filters to associate
`data->id` with the internal stream state.
- changed implementations of all affected connection filters
- removed `h2_ctx*` and `h3_ctx*` from `struct HTTP` and thus
the easy handle
- solves the problem of attaching "foreign protocol" easy handles
during connection shutdown
Test 1616 verifies the new hash functions.
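A rough sketch of how an h2/h3 filter might use such a hash to map
`data->id` to its stream state; the names and signatures here are
assumptions for illustration, not the actual internals:
```c
/* Illustrative only: assumed names/signatures */
struct cf_ctx {
  struct Curl_hash_offt streams;   /* data->id -> struct stream_ctx * */
};

static struct stream_ctx *stream_get(struct cf_ctx *ctx,
                                     struct Curl_easy *data)
{
  /* look up the stream state for this easy handle's id */
  return Curl_hash_offt_get(&ctx->streams, data->id);
}

static CURLcode stream_add(struct cf_ctx *ctx, struct Curl_easy *data,
                           struct stream_ctx *stream)
{
  /* associate the id with the stream state */
  if(!Curl_hash_offt_set(&ctx->streams, data->id, stream))
    return CURLE_OUT_OF_MEMORY;
  return CURLE_OK;
}
```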
Closes #13204
Before this patch `lib/curl_setup.h` defined these two macros right
next to each other, then the source code used them interchangeably.
After this patch, `USE_HTTP3` guards all HTTP/3 / QUIC features.
(Like `USE_HTTP2` does for HTTP/2.) `ENABLE_QUIC` is no longer used.
This patch doesn't change the way HTTP/3 is enabled via autotools
or CMake. Builders who enabled HTTP/3 manually by defining both of
these macros via `CPPFLAGS` can now delete `-DENABLE_QUIC`.
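In code terms, the change collapses two guards into one (illustrative):
```c
/* Before: some places checked ENABLE_QUIC, others USE_HTTP3 */
#ifdef ENABLE_QUIC
  /* QUIC specifics */
#endif
#ifdef USE_HTTP3
  /* HTTP/3 specifics */
#endif

/* After: a single macro guards all HTTP/3 / QUIC features */
#ifdef USE_HTTP3
  /* QUIC and HTTP/3 specifics */
#endif
```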
Closes #13352
Save some CPU cycles in HTTP response header processing:
- pass the length of the header line along
- use sizeof() of string constants instead of strlen()
- check the line length first, so a prefix match is only attempted
when the prefix can possibly fit
- switch on the first header char to limit the number of checks
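A sketch of the pattern these bullets describe; this is illustrative,
not the actual lib/http.c code:
```c
#include <string.h>
#include <strings.h>  /* strncasecmp, POSIX */

/* prefix match against a string literal, length via sizeof(), only
   attempted when the line is long enough for the prefix to fit */
#define HD_IS(lit, hd, hdlen) \
  (((hdlen) >= sizeof(lit) - 1) && !strncasecmp(lit, hd, sizeof(lit) - 1))

static void process_header(const char *hd, size_t hdlen)
{
  if(!hdlen)
    return;
  switch(hd[0] | 0x20) {  /* lower-case the first char to limit checks */
  case 'c':
    if(HD_IS("Content-Length:", hd, hdlen)) {
      /* parse the content length */
    }
    break;
  case 't':
    if(HD_IS("Transfer-Encoding:", hd, hdlen)) {
      /* parse the transfer encoding */
    }
    break;
  default:
    break;  /* not a header this code cares about */
  }
}
```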
Closes #13143
Move all handling of HTTP's `Expect: 100-continue` feature into a client
reader. Add sending flag `KEEP_SEND_TIMED` that triggers transfer
sending on general events like a timer.
HTTP installs a `CURL_CR_PROTOCOL` reader when announcing `Expect:
100-continue`. That reader works as follows:
- on first invocation, it records the time, starts the `EXPIRE_100_TIMEOUT`
timer, disables `KEEP_SEND`, enables `KEEP_SEND_TIMED` and returns 0
with eos=FALSE, like a paused upload.
- on subsequent invocations, it checks if the timer has expired. If so, it
enables `KEEP_SEND` and switches to passing through reads to the
underlying readers.
Transfer handling's `readwrite()` will be invoked when a timer expires
(like `EXPIRE_100_TIMEOUT`) or when data from the server arrives. Seeing
`KEEP_SEND_TIMED`, it will try to upload more data, which triggers
reading from the client readers again, which may then lead to a new
pause or start the upload.
Flags and timestamps connected to this have been moved from
`SingleRequest` into the reader's context.
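A condensed sketch of that reader logic; the names, fields and
signatures below are assumptions for illustration, not the actual
implementation:
```c
static CURLcode cr_exp100_read(struct Curl_easy *data,
                               struct Curl_creader *reader,
                               char *buf, size_t blen,
                               size_t *nread, bool *eos)
{
  struct cr_exp100_ctx *ctx = reader->ctx;

  switch(ctx->state) {
  case EXP100_SENDING:
    /* a 100 arrived or the timer expired earlier: pass reads through */
    return Curl_creader_read(data, reader->next, buf, blen, nread, eos);
  case EXP100_START:
    /* first invocation: wait for the 100 response */
    ctx->started = Curl_now();
    Curl_expire(data, EXP100_TIMEOUT_MS, EXPIRE_100_TIMEOUT);
    data->req.keepon &= ~KEEP_SEND;       /* stop regular sending */
    data->req.keepon |= KEEP_SEND_TIMED;  /* but wake up on timer */
    ctx->state = EXP100_AWAITING;
    break;
  case EXP100_AWAITING:
    if(Curl_timediff(Curl_now(), ctx->started) >= EXP100_TIMEOUT_MS) {
      /* no 100 within the timeout: start the upload anyway */
      data->req.keepon |= KEEP_SEND;
      ctx->state = EXP100_SENDING;
      return Curl_creader_read(data, reader->next, buf, blen, nread, eos);
    }
    break;
  }
  *nread = 0;       /* behave like a paused upload */
  *eos = FALSE;
  return CURLE_OK;
}
```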
Closes #13110
A transfer may perform several `SingleRequest`s to succeed. This happens
regularly for authentication, redirect follows and retries on failed
connections.
The "readwrite()" calls and functions connected to those carried a `bool
*done` parameter to indicate that the current `SingleRequest` is over.
This may happen before `upload_done` or `download_done` bits of
`SingleRequest` are set.
The problem is that `write_resp()` protocol handlers are now invoked in
places where the `bool *done` cannot be passed up to the caller. Instead
of being a bool in the call chain, it needs to become a member of
`SingleRequest`, reflecting its state.
This removes the `bool *done` parameter and adds the `done` bit to
`SingleRequest` instead. It adds `Curl_req_soft_reset()` for using a
`SingleRequest` in a follow up, clearing `done` and other
flags/counters.
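A minimal sketch of what such a soft reset amounts to; the exact fields
cleared here are assumptions for illustration:
```c
/* illustrative only */
void Curl_req_soft_reset(struct SingleRequest *req)
{
  req->done = FALSE;           /* the follow-up request is not over */
  req->upload_done = FALSE;
  req->download_done = FALSE;
  req->bytecount = 0;          /* restart per-request counters */
  req->headerbytecount = 0;
}
```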
Closes #13096
- update client reader documentation
- client reader, add rewind capabilities
- tell creader to rewind on next start
- Curl_client_reset() will keep reader for future rewind if requested
- add Curl_client_cleanup() for freeing all resources independent of
rewinds
- add Curl_client_start() to trigger rewinds
- move rewind code from multi.c to sendf.c and make part of
"cr-in"'s implementation
- http, move the "resume_from" handling into the client readers
- the setup of an HTTP request is reshuffled to follow:
* determine method, target, auth negotiation
* install the client reader(s) for the request, including crlf
conversions and "chunked" encoding
* apply ranges to client reader
* concat request headers, upgrades, cookies, etc.
* complete request by determining Content-Length of installed
readers in combination with method
* send
- add methods for client readers to
* return the overall length they will generate (or -1 when unknown)
* return the amount of data on the CLIENT level, so that
expect-100 can decide if it wants to apply itself
* set a "resume_from" offset or fail if unsupported
- struct HTTP has become largely empty now
- rename `Client_reader_*` to `Curl_creader_*`
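A sketch of the extended reader operations; names and shapes are
assumed from the list above (see docs/CLIENT-READERS.md for the real
interface):
```c
/* illustrative only */
struct Curl_crtype {
  const char *name;
  CURLcode (*read)(struct Curl_easy *data, struct Curl_creader *r,
                   char *buf, size_t blen, size_t *nread, bool *eos);
  /* overall length this reader will generate, -1 when unknown */
  curl_off_t (*total_length)(struct Curl_easy *data,
                             struct Curl_creader *r);
  /* skip the first `offset` bytes; fail if resumption is unsupported */
  CURLcode (*resume_from)(struct Curl_easy *data,
                          struct Curl_creader *r, curl_off_t offset);
};
```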
Closes #13026
- Move all the "upload_done" handling to request.c
- add possibility to abort sending of a request
- add `Curl_req_done_sending()` for checks
- transfer.c: readwrite_upload() now clean
- removing data->state.ulbuf and data->req.upload_fromhere
- as well as data->req.upload_present
- set data->req.upload_done on having read all from
the client and completely flushed the send buffer
- tftp, remove setting of data->req.upload_fromhere
- serves no purpose as `upload_present` is not set
and the data itself is sent directly via `sendto()` anyway
- smtp, make upload EOB conversion a client reader
- xfer_ulbuf addition
- add xfer_ulbuf for borrowing, similar to xfer_buf
- use in file upload
- use in c-hyper body sending
- h1-proxy, remove init of data->state.ulbuf that is never used
- smb, add own send_buf instead of using data->state.ulbuf
Closes #13010
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`,
while CURLE_AGAIN is returned by all other send() variants
(see the sketch below).
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
Added as documented in
[CLIENT-READERS.md](5b1f31dfba/docs/CLIENT-READERS.md).
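A sketch of the two conventions from a caller's perspective; the
signatures are assumptions for illustration:
```c
static CURLcode send_on_transfer(struct Curl_easy *data,
                                 const void *buf, size_t blen)
{
  size_t nwritten;
  CURLcode result = Curl_xfer_send(data, buf, blen, &nwritten);
  if(!result && !nwritten) {
    /* would-block flattened into CURLE_OK with 0 bytes: retry later */
  }
  return result;
}

static CURLcode send_on_chain(struct connectdata *conn, int sockindex,
                              const void *buf, size_t blen)
{
  size_t nwritten;
  CURLcode result = Curl_conn_send(conn, sockindex, buf, blen, &nwritten);
  if(result == CURLE_AGAIN) {
    /* all other send() variants report would-block explicitly */
  }
  return result;
}
```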
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending request headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves #12938 as now small and large requests have the
same code path.
Changes done to test cases:
- test513: now fails before sending request headers, as this initial
"client read" triggers the setup fault. It now behaves the same as in
hyper builds
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes #12969
This adds a method for sending not just raw bytes, but bytes that are
either "headers" or "body". The send abstraction stack, top to bottom,
now is:
* `Curl_req_send()`: has parameter to indicate amount of header bytes,
buffers all data.
* `Curl_xfer_send()`: knows on which socket index to send, returns
amount of bytes sent.
* `Curl_conn_send()`: called with socket index, returns amount of bytes
sent.
In addition there is `Curl_req_flush()` for writing out all buffered
bytes.
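A sketch of using that stack from the top; the signatures are
assumptions for illustration:
```c
/* send a request whose first `hds_len` bytes are headers, rest is body */
static CURLcode send_full_request(struct Curl_easy *data,
                                  const char *req, size_t req_len,
                                  size_t hds_len)
{
  /* buffers internally whatever the connection cannot take right now */
  CURLcode result = Curl_req_send(data, req, req_len, hds_len);
  if(result)
    return result;
  /* later, e.g. when the socket is writable again: */
  return Curl_req_flush(data);
}
```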
`Curl_req_send()` is active for requests without a body, while
`Curl_buffer_send()` is still used for the others. This is because some
special quirks need to be addressed in future parts:
* `expect-100` handling
* `Curl_fillreadbuffer()` needs to add directly to the new
`data->req.sendbuf`
* special body handling, like `chunked` encoding and line-end
conversions, will be moved into something like a Client Reader.
In functions of the pattern `CURLcode xxx_send(..., ssize_t *written)`,
replace the `ssize_t` with a `size_t`. It makes no sense to allow for negative
values as the returned `CURLcode` already specifies error conditions. This
allows easier handling of lengths without casting.
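Illustrated on a hypothetical send function, in the diff style used
elsewhere in these notes:
```diff
- CURLcode xxx_send(struct Curl_easy *data, const void *buf, size_t blen,
-                   ssize_t *written);
+ CURLcode xxx_send(struct Curl_easy *data, const void *buf, size_t blen,
+                   size_t *written);
```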
Closes #12964
This clarifies the handling of server responses by folding the code for
the complicated protocols into their protocol handlers. This concerns
mainly HTTP and its bastard sibling RTSP.
The terms "read" and "write" are often used without clear context if
they refer to the connect or the client/application side of a
transfer. This PR uses "read/write" for operations on the client side
and "send/receive" for the connection, e.g. server side. If this is
considered useful, we can revisit renaming of further methods in another
PR.
Curl's protocol handler `readwrite()` method has been changed:
```diff
- CURLcode (*readwrite)(struct Curl_easy *data, struct connectdata *conn,
- const char *buf, size_t blen,
- size_t *pconsumed, bool *readmore);
+ CURLcode (*write_resp)(struct Curl_easy *data, const char *buf, size_t blen,
+ bool is_eos, bool *done);
```
The name was changed to clarify that this writes response data to the
client side. The parameter changes are:
* `conn` removed as it always operates on `data->conn`
* `pconsumed` removed as the method needs to handle all data on success
* `readmore` removed as no longer necessary
* `is_eos` as indicator that this is the last call for the transfer
response (end-of-stream).
* `done` TRUE on return iff the transfer response is to be treated as
finished
This change affects many files only because of updated comments in
handlers that provide no implementation. The real change is that the
HTTP protocol handlers now provide an implementation.
The HTTP protocol handlers `write_resp()` implementation will get passed
**all** raw data of a server response for the transfer. The HTTP/1.x
formatted status and headers, as well as the undecoded response
body. `Curl_http_write_resp_hds()` is used internally to parse the
response headers and pass them on. This method is public as the RTSP
protocol handler also uses it.
HTTP/1.1 "chunked" transport encoding is now part of the general
*content encoding* writer stack, just like other encodings. A new flag
`CLIENTWRITE_EOS` was added for the last client write. This allows
writers to verify that they are in a valid end state. The chunked
decoder will check if it indeed has seen the last chunk.
The general response handling in `transfer.c:466` happens in the
function `readwrite_data()`. This now operates mainly like:
```
static CURLcode readwrite_data(data, ...)
{
do {
Curl_xfer_recv_resp(data, buf)
...
Curl_xfer_write_resp(data, buf)
...
} while(interested);
...
}
```
All the response data handling is implemented in
`Curl_xfer_write_resp()`. It calls the protocol handler's `write_resp()`
implementation if available, or does the default behaviour.
All raw response data needs to pass through this function. Which also
means that anyone in possession of such data may call
`Curl_xfer_write_resp()`.
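A simplified sketch of that dispatch (not the literal code):
```c
CURLcode Curl_xfer_write_resp(struct Curl_easy *data,
                              const char *buf, size_t blen, bool is_eos)
{
  struct connectdata *conn = data->conn;
  if(conn->handler->write_resp) {
    /* protocol handler (e.g. HTTP) consumes all raw response bytes;
       the `done` result feeds request state handling, elided here */
    bool done = FALSE;
    return conn->handler->write_resp(data, buf, blen, is_eos, &done);
  }
  /* default: hand the bytes to the client writers unchanged */
  return Curl_client_write(data,
                           CLIENTWRITE_BODY | (is_eos ? CLIENTWRITE_EOS : 0),
                           (char *)buf, blen);
}
```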
Closes #12480
- add `SingleRequest->download_done` as indicator that
all download bytes have been received
- remove `stop_reading` bool from readwrite functions
- move excess body handling into client download writer
Closes #12371
- changed header/chunk/handler->readwrite prototypes to accept `buf`,
`blen` and a `pconsumed` pointer. They now get the buffer to work on
and report back how many bytes they consumed
- eliminated `k->str` in SingleRequest
- improved excess data handling to properly calculate with any body data
left in the headerb buffer
- reduced the `k->badheader` enum to a plain bool
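A sketch of the consume-reporting contract; `parse_one_line()` is a
hypothetical helper:
```c
static CURLcode example_readwrite(struct Curl_easy *data,
                                  const char *buf, size_t blen,
                                  size_t *pconsumed)
{
  *pconsumed = 0;
  while(blen) {
    size_t used = parse_one_line(data, buf, blen); /* hypothetical */
    if(!used)
      break;                /* incomplete data, wait for more */
    buf += used;
    blen -= used;
    *pconsumed += used;     /* report back how much was consumed */
  }
  return CURLE_OK;
}
```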
Closes #12283
- Increase the maximum request method name length from 11 to 23.
For HTTP/1.1 and earlier there's not a specific limit in libcurl for
method length except that it is limited by the initial HTTP request
limit (DYN_HTTP_REQUEST). Prior to fc2f1e54 HTTP/2 was treated the same
and there was no specific limit.
According to the Internet Assigned Numbers Authority (IANA), the longest
registered method is UPDATEREDIRECTREF which is 17 characters.
Also there are unregistered methods used by some companies that are
longer than 11 characters.
The limit was originally added by 61f52a97 but not used until fc2f1e54.
Ref: https://www.iana.org/assignments/http-methods/http-methods.xhtml
Closes https://github.com/curl/curl/pull/12311
When the legacy CURLOPT_HTTPPOST option is used, it gets converted into
the modern mimepost struct at first use. This data is (now) kept for the
entire transfer and not only per single HTTP request. This re-enables
rewind at the beginning of the second request instead of at the end of
the first, as brought by 1b39731.
The request struct is per-request data only.
Extend test 650 to verify.
Fixes #11680
Reported-by: yushicheng7788 on github
Closes #11682
To avoid abuse. The limit is set to 300 KB for the accumulated size of
all received HTTP headers for a single response. Incomplete research
suggests that Chrome uses a 256-300 KB limit, while Firefox allows up to
1MB.
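A sketch of how such a cap can be enforced; the constant name and check
placement are assumptions:
```c
#define MAX_HTTP_RESP_HEADER_SIZE (300 * 1024)

static CURLcode account_header(struct SingleRequest *req, size_t hdlen)
{
  req->headerbytecount += hdlen;    /* accumulated over the response */
  if(req->headerbytecount > MAX_HTTP_RESP_HEADER_SIZE)
    return CURLE_RECV_ERROR;        /* refuse abusive header volumes */
  return CURLE_OK;
}
```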
Closes #11582
- state is fully kept at the connection, since curl_ws_send() and
curl_ws_recv() have a lifetime beyond the usual transfers
- no more limit on frame sizes
Reported-by: simplerobot on github
Fixes #10962
Closes #10999
- with `--proxy-http2` allow h2 ALPN negotiation to
forward proxies
- applies to http: requests against an https: proxy only,
as https: requests will auto-tunnel
- adding an HTTP/1 request parser in http1.c
- removed h2h3.c
- using new request parser in nghttp2 and all h3 backends
- adding test 2603 for request parser
- adding h2 proxy test cases to test_10_*
scorecard.py: request scoring accidentally always ran curl
with '-v'. Removed that; expect doubled numbers.
labeller: added http1.* and h2-proxy sources to detection
Closes #10967
- remove NGHTTP2 members of `struct HTTP`
- add `void *h2_ctx` to `struct HTTP`
- add `void *h3_ctx` to `struct HTTP`
- separate h2/h3 pointers are needed for eyeballing
- manage local stream_ctx in http implementations
Closes #10877
- ngtcp2: using bufq for recv stream data
- internal stream_ctx instead of `struct HTTP` members
for quiche, ngtcp2 and msh3
- no more QUIC related members in `struct HTTP`
- experimental use of recvmmsg(), disabled by default
- testing on my old debian box shows no throughput improvements.
- leaving it in, but disabled, for future revisit
- vquic: common UDP receive code for ngtcp2 and quiche
- vquic: common UDP send code for ngtcp2 and quiche
- added pytest skips for known msh3 failures
- fix unit2601 to survive torture testing
- quiche: using latest `master` from quiche and enabling large download
tests, now that key change is supported
- fixing test_07_21 where retry handling of starting a stream
was faulty
- msh3: use bufq for recv buffering headers and data
- msh3: replace fprintf debug logging with LOG_CF where possible
- msh3: force QUIC expire timers on recv/send to have more than
1 request per second served
Closes #10772
- use bufq for send/receive of network data
- use bufq for send/receive of stream data
- use HTTP/2 flow control with no-auto updates to control the
amount of data we are buffering for a stream.
The HTTP/2 stream window is set to 128K after local tests, defined
as a code constant for now
- eliminating the PAUSEing of nghttp2 processing when receiving data,
since a stream can now take in all DATA frames nghttp2 forwards
Improved scorecard and adjusted http2 stream window sizes
- scorecard improved output formatting and options default
- scorecard now also benchmarks small requests / second
Closes #10771
Adding `bufq`:
- at init() time configured to hold up to `n` chunks of `m` bytes each.
- various methods for reading from and writing to it.
- `peek` support to get access to buffered data without copy
- `pass` support to allow buffer flushing on write if it becomes full
- use case: IO buffers for dynamic reads and writes that do not blow up
- distinct from `dynbuf` in that:
- it maintains a read position
- a write to a full bufq returns CURLE_AGAIN instead of nuking the
buffer
- Init options:
- SOFT_LIMIT: allow writes into a full bufq
- NO_SPARES: free empty chunks right away
- a `bufc_pool` that can keep a number of spare chunks to
be shared between different `bufq` instances
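Typical use might look like this; the exact API shapes are assumptions
based on the description above:
```c
static void bufq_example(const unsigned char *data, size_t datalen)
{
  struct bufq q;
  const unsigned char *chunk;
  size_t chunk_len;
  CURLcode result;
  ssize_t n;

  Curl_bufq_init(&q, 16 * 1024, 8);    /* up to 8 chunks of 16KB each */

  n = Curl_bufq_write(&q, data, datalen, &result);
  if(n < 0 && result == CURLE_AGAIN) {
    /* bufq is full: back off instead of the buffer blowing up */
  }

  if(Curl_bufq_peek(&q, &chunk, &chunk_len)) {
    /* inspect buffered data without copying it out */
    Curl_bufq_skip(&q, chunk_len);     /* then advance the read position */
  }
  Curl_bufq_free(&q);
}
```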
Adding `dynhds`:
- a straightforward list of name+value pairs as used for HTTP headers
- headers can be appended dynamically
- headers can be removed again
- headers can be replaced
- headers can be looked up
- http/1.1 formatting into a `dynbuf`
- configured at init() with limits on header counts and total string
sizes
- use case: pass an HTTP request or response around without being version
specific
- express an HTTP request without a curl easy handle (used in h2 proxy
tunnels)
- future extension possibilities:
- conversions of `dynhds` to nghttp2/nghttp3 name+value arrays
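And for `dynhds`, again with assumed API shapes:
```c
static void dynhds_example(void)
{
  struct dynhds hds;
  struct dynbuf out;

  Curl_dynhds_init(&hds, 64, 16 * 1024);  /* max entries, max total bytes */
  Curl_dynhds_cadd(&hds, "Host", "example.com");
  Curl_dynhds_cadd(&hds, "Accept", "*/*");

  if(Curl_dynhds_cget(&hds, "host")) {
    /* lookup by name, case-insensitive */
  }

  /* http/1.1 formatting into a dynbuf, one "Name: value" CRLF line each */
  Curl_dyn_init(&out, 16 * 1024);
  Curl_dynhds_h1_dprint(&hds, &out);

  Curl_dyn_free(&out);
  Curl_dynhds_free(&hds);
}
```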
Closes #10720
vquic stabilization
- udp send code shared between ngtcp2 and quiche
- quiche handling of data and events improved
ngtcp2 and pytest improvements
- fixes handling of "drain" situations, discovered in scorecard
tests with the Caddy server.
- improvements in handling transfers that already have data or
are already closed, to make an early return on recv
pytest
- adding caddy tests when available
scorecard improvements.
- using correct caddy port
- allowing tests for only httpd or caddy
Closes #10451
New cfilter HTTP-CONNECT for h3/h2/http1.1 eyeballing.
- filter is installed when `--http3` in the tool is used (or
the equivalent CURLOPT_ done in the library)
- starts a QUIC/HTTP/3 connect right away. Should that not
succeed after 100ms (subject to change), a parallel attempt
is started for HTTP/2 and HTTP/1.1 via TCP
- both attempts are subject to IPv6/IPv4 eyeballing, same
as happens for other connections
- tie timeout to the ip-version HAPPY_EYEBALLS_TIMEOUT
- use a `soft` timeout at half the value. When the soft timeout
expires, the HTTPS-CONNECT filter checks if the QUIC filter
has received any data from the server. If not, it will start
the HTTP/2 attempt.
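A sketch of the soft-timeout decision; the helper names here are
hypothetical:
```c
static void check_soft_timeout(struct eyeballer *eb, struct curltime now)
{
  timediff_t soft_ms = eb->timeout_ms / 2;   /* half the HE timeout */
  if(!eb->tcp_started &&
     Curl_timediff(now, eb->started) >= soft_ms &&
     !quic_has_recv_data(eb->quic_cf)) {     /* hypothetical query */
    /* QUIC is silent so far: race an HTTP/2 / HTTP/1.1 attempt */
    start_tcp_attempt(eb);                   /* hypothetical */
    eb->tcp_started = TRUE;
  }
}
```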
HTTP/3(ngtcp2) improvements.
- setting call_data in all cfilter calls, similar to http/2 and vtls
filters, for use in callbacks where no stream data is available.
- returning CURLE_PARTIAL_FILE for prematurely terminated transfers
- enabling pytest test_05 for h3
- shifting functionality to "connect" UDP sockets from ngtcp2
implementation into the udp socket cfilter. Because unconnected
UDP sockets are weird. For example they error when adding to a
pollset.
HTTP/3(quiche) improvements.
- fixed upload bug in quiche implementation, now passes test 251 and
pytest
- error codes on stream RESET
- improved debug logs
- handling of DRAIN during connect
- limiting pending event queue
HTTP/2 cfilter improvements.
- use LOG_CF macros for dynamic logging in debug build
- fix CURLcode on RST streams to be CURLE_PARTIAL_FILE
- enable pytest test_05 for h2
- fix upload pytests and improve parallel transfer performance.
GOAWAY handling for ngtcp2/quiche
- during connect, when the remote server refuses to accept new connections
and closes immediately (so the local conn goes into DRAIN phase), the
connection is torn down and another attempt is made after a short grace
period.
This is the behaviour observed with nghttpx when we tell it to shut
down gracefully. Tested in pytest test_03_02.
TLS improvements
- ALPN selection for SSL/SSL-PROXY filters in one vtls set of functions,
replacing the copies of that logic in all TLS backends.
- standardized the infof logging of offered ALPNs
- ALPN negotiated: have a common function for all backends that sets the
alpn property and connection related things based on the negotiated
protocol (or lack thereof).
- new tests/tests-httpd/scorecard.py for testing h3/h2 protocol implementation.
Invoke:
python3 tests/tests-httpd/scorecard.py --help
for usage.
Improvements on gathering connect statistics and socket access.
- new CF_CTRL_CONN_REPORT_STATS cfilter control for having cfilters
report connection statistics. This is triggered when the connection
has completely connected.
- new void Curl_pgrsTimeWas(..) method to report a timer update with
a timestamp of when it happened. This allows for updating timers
"later", e.g. a connect statistic after full connectivity has been
reached.
- in case of HTTP eyeballing, the previous changes will update
statistics only from the filter chain that "won" the eyeballing.
- new cfilter query CF_QUERY_SOCKET for retrieving the socket used
by a filter chain.
Added methods Curl_conn_cf_get_socket() and Curl_conn_get_socket()
for convenient use of this query.
- Change VTLS backends to query their sub-filters for the socket when
checks during the handshake are made.
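Use of the socket query could look like this (sketch):
```c
static curl_socket_t chain_socket(struct Curl_cfilter *cf,
                                  struct Curl_easy *data)
{
  /* ask the filter chain which socket it operates on */
  curl_socket_t sock = Curl_conn_cf_get_socket(cf, data);
  if(sock == CURL_SOCKET_BAD) {
    /* the chain has no socket (yet), e.g. not connected */
  }
  return sock;
}
```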
HTTP/3 documentation on how https eyeballing works.
Scorecard with Caddy.
- configure can be run with `--with-test-caddy=path` to specify which caddy to use for testing
- tests/tests-httpd/scorecard.py now measures download speeds with caddy
pytest improvements
- adding Makefile to clean gen dir
- adding nghttpx rundir creation on start
- checking for httpd version 2.4.55 in test_05 cases where it is needed, skipping with a message if it is too old.
- catch exception when checking for caddy existence on the system.
Closes #10349
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow copyright statements without years
Closes #10205
stdint.h was only included in http.h when ENABLE_QUIC was defined, but
symbols from stdint.h are also used when USE_NGHTTP2 is defined. This
causes build errors when USE_NGHTTP2 is defined but ENABLE_QUIC is not.
Closes #10185
Refactoring of connection setup and happy eyeballing. Move
nghttp2, ngtcp2, quiche and msh3 into connection filters.
- eyeballing cfilter that uses sub-filters for performing parallel connects
- socket cfilter for all transport types, including QUIC
- QUIC implementations in cfilter, can now participate in eyeballing
- connection setup is more dynamic in order to adapt to whichever filter
actually connected. Relevant for deciding if an SSL filter needs to be
added or if SSL has already been provided
- HTTP/3 test cases similar to HTTP/2
- multiuse of parallel transfers for HTTP/3, tested for ngtcp2 and quiche
- Fix for data attach/detach in VTLS filters that could lead to crashes
during parallel transfers.
- Eliminating setup() methods in cfilters, no longer needed.
- Improving Curl_conn_is_alive() to replace Curl_connalive(),
integrating ssl alive checks into the cfilters.
- Adding CF_CNTRL_CONN_INFO_UPDATE to tell filters to update
connection info and persist it at the easy handle.
- Several more cfilter related cleanups and moves:
- stream_weight and dependency info is now wrapped in struct
Curl_data_priority
- Curl_data_priority is available in HTTP2|HTTP3 builds
- its members depend on NGHTTP2 support
- handling init/reset/cleanup of the priority is part of url.c
- data->state.priority is the same struct, but a shallow copy for
compares only
- PROTOPT_STREAM has been removed
- Curl_conn_is_multiplex() now available to check on capability
- Adding query method to connection filters.
- ngtcp2+quiche: implementing query for max concurrent transfers.
- Adding is_alive and keep_alive cfilter methods. Adding DATA_SETUP event.
- setting keepalive timestamp on connect
- DATA_SETUP is called after the connection has been completely
set up (but may not be connected yet) to allow filters to initialize
the data members they use.
- there is no socket to be had with msh3, it is unclear how select
shall work
- manual test via "curl --http3 https://curl.se" fail with "empty
reply from server".
- Various socket/conn related cleanups:
- Curl_socket is now Curl_socket_open and in cf-socket.c
- Curl_closesocket is now Curl_socket_close and in cf-socket.c
- Curl_ssl_use has been replaced with Curl_conn_is_ssl
- Curl_conn_tcp_accepted_set has been split into
Curl_conn_tcp_listen_set and Curl_conn_tcp_accepted_set
with a clearer purpose
Closes #10141
Fix various uses of connnect by replacing them with connect.
Closes: #10045
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
curl_ws_recv() now receives data to fill up the provided buffer, but can
return a partial fragment. The function now also gets a pointer to a
curl_ws_frame struct with metadata that also mentions the offset and
total size of the fragment (of which you might be receiving a smaller
piece). This way, large incoming fragments will be "streamed" to the
application. When the curl_ws_frame struct field 'bytesleft' is 0, the
final fragment piece has been delivered.
curl_ws_recv() was also adjusted to work with a buffer size smaller than
the fragment size. (Possibly needless to say, as the fragment size can
now be 63 bits large.)
curl_ws_send() now supports sending a piece of a fragment, in a
streaming manner, in addition to sending the entire fragment in a single
call if it is small enough. To send a huge fragment, curl_ws_send() can
be used to send it in many small calls by first telling libcurl about
the total expected fragment size, and then send the payload in N number
of separate invokes and libcurl will stream those over the wire.
The struct that curl_ws_meta() returns is now called 'curl_ws_frame' and
it has been extended with two new fields, *offset* and *bytesleft*, to
help describe the passed-on data chunk when a fragment is delivered in
many smaller pieces.
The documentation has been updated accordingly.
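For example, an application can drain a large incoming fragment with a
small buffer like this (error and CURLE_AGAIN handling trimmed for
brevity):
```c
#include <curl/curl.h>
#include <stdio.h>

static void recv_one_fragment(CURL *curl)
{
  char buf[256];            /* deliberately smaller than the fragment */
  size_t nread;
  const struct curl_ws_frame *meta;

  do {
    if(curl_ws_recv(curl, buf, sizeof(buf), &nread, &meta))
      return;
    /* meta->offset is this piece's position within the fragment,
       meta->bytesleft the bytes still to come after it */
    fwrite(buf, 1, nread, stdout);
  } while(meta->bytesleft);  /* 0 means the final piece was delivered */
}
```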
Closes #9636
This function is currently located in the lib/http.c module and is
therefore disabled by the CURL_DISABLE_HTTP conditional token.
As it may be called by TLS backends, disabling HTTP results in an
undefined reference error at link time.
Move this function to vauth/vauth.c to always provide it and rename it
as Curl_auth_allowed_to_host() to respect the vauth module naming
convention.
Closes #9600
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes #8869
Also add STRCONST, a macro that returns a string literal and its length
for functions that take "string,len".
Removes unnecessary calls to strlen().
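The macro boils down to the following; `handle_token()` is a
hypothetical consumer:
```c
/* expand a string literal into the "string,len" argument pair,
   with the length computed at compile time */
#define STRCONST(x) x, sizeof(x)-1

void handle_token(const char *str, size_t len);  /* hypothetical */

/* handle_token(STRCONST("chunked")) == handle_token("chunked", 7) */
```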
Closes #8391
This is done by having native code do the haproxy header output before
hyper issues its request. The little downside with this approach is that
we need the entire Curl_buffer_send() function built, which is otherwise
not used for hyper builds.
If hyper ends up getting native support for the haproxy protocols we can
backpedal on this.
Enables tests 1455 and 1456
Closes #8034
- Make content length (ie download size) accessible to the user in the
header callback, but only after all headers have been processed (ie
only in the final call to the header callback).
Background:
For a long time the content length could be retrieved in the header
callback via CURLINFO_CONTENT_LENGTH_DOWNLOAD_T as soon as it was parsed
by curl.
Changes were made in 8a16e54 (precedes 7.79.0) to ignore content length
if any transfer encoding is used. A side effect of that was that
content length was not set by libcurl until after the header callback
was called the final time, because until all headers are processed it
cannot be determined if content length is valid.
This change keeps the same intention (all headers must be processed)
but now the content length is available before the final call to the
header function, the call that indicates all headers have been processed
(ie a blank header).
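For example, a header callback (installed with CURLOPT_HEADERFUNCTION,
with the easy handle passed as CURLOPT_HEADERDATA) can now do:
```c
#include <curl/curl.h>
#include <string.h>

static size_t on_header(char *buffer, size_t size, size_t nitems,
                        void *userdata)
{
  CURL *curl = userdata;       /* easy handle passed via HEADERDATA */
  size_t len = size * nitems;

  if(len == 2 && !memcmp(buffer, "\r\n", 2)) {
    /* blank header: all headers processed, content length is valid */
    curl_off_t cl;
    if(!curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &cl)) {
      /* cl holds the download size, or -1 if unknown */
    }
  }
  return len;
}
```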
Bug: https://github.com/curl/curl/commit/8a16e54#r57374914
Reported-by: sergio-nsk@users.noreply.github.com
Co-authored-by: Daniel Stenberg
Fixes https://github.com/curl/curl/issues/7804
Closes https://github.com/curl/curl/pull/7803