When ending an FTP upload, we shut down the connection gracefully, so
that the server is notified that we have sent all bytes. Without TLS
involved, this is mostly a NOP. With TLS, close-notify messages should
be exchanged.
As reported in #14843, not all servers seem to do that. Since it is the
server's responsibility to check that it has received everything, we
just log the timeout and proceed as if everything is fine.
In the receive direction, we still fail the transfer if the server does
not shut down its direction properly.
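As a rough standalone sketch of that policy (not the actual curl code;
the enum and helper below are made up for illustration):
```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { SHUT_DONE, SHUT_TIMEOUT, SHUT_ERROR } shut_result;

/* Decide the upload outcome from how the two TLS shutdown directions
 * ended. A timeout waiting for the server's close-notify in the send
 * direction is only logged (it is the server's job to verify it got
 * everything), while the receive direction still has to shut down
 * cleanly for the transfer to count as successful. */
static bool upload_shutdown_ok(shut_result send_dir, shut_result recv_dir)
{
  if(send_dir == SHUT_TIMEOUT)
    fprintf(stderr, "timeout on close-notify in send direction, "
            "proceeding as if everything is fine\n");
  else if(send_dir == SHUT_ERROR)
    return false;
  return recv_dir == SHUT_DONE;
}
```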
Fixes #14843
Reported-by: Rasmus Melchior Jacobsen
Closes #14848
- Renames Curl_readwrite() to Curl_sendrecv() to reflect that it
is mainly about talking to the server, not reads or writes to the
client. Adds a `nowp` parameter since the single caller already
has this.
- Curl_sendrecv() now runs all possible operations whenever it is
called and either it had been polling sockets or its 'select_bits'
are set.
POLL_IN/POLL_OUT are not always directly related to send/recv
operations. Filters like HTTP/2, QUIC or TLS may monitor reverse
directions. If a transfer does not want to send (no KEEP_SEND set),
it will not do so, as before. The same goes for receives.
- Curl_update_timer() now checks the absolute timestamp of an expiry
and the last/new timeout to determine if the application needs
to stop/start/restart its timer. This fixes edge cases where
updates did not happen as they should have (see the sketch after
this list).
- improved --test-event curl_easy_perform() simulation to handle
situations where no sockets are registered but a timeout is
in place.
- fixed a bug in events_socket() that complained about removing an
unknown socket when it had in fact just removed that socket; it
merely was the last one in the list
- fixed conncache's internal handle to carry the multi instance
(where the cache has one) so that operations on the closure handle
trigger event callbacks correctly.
- fixed conncache to not POLL_REMOVE a socket twice when a connection
was closed.
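A standalone sketch of the timer decision based on absolute expiry
timestamps (names and types are illustrative, not curl's internals):
```c
/* Possible instructions for the application's timer callback. */
typedef enum { TIMER_KEEP, TIMER_STOP, TIMER_START } timer_action;

/* armed_expiry: absolute expiry (ms) the application's timer was last
 * armed with, 0 if no timer is armed. new_expiry: the earliest pending
 * expiry now, 0 if nothing is due. Comparing absolute timestamps
 * catches the edge case where a new timeout has the same relative
 * value but a different expiry. */
static timer_action update_timer(long long armed_expiry, long long new_expiry)
{
  if(!new_expiry)
    return armed_expiry ? TIMER_STOP : TIMER_KEEP;
  if(!armed_expiry || new_expiry != armed_expiry)
    return TIMER_START;       /* (re)arm with the new expiry */
  return TIMER_KEEP;
}
```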
Closes #14561
Since data can be held in connection filter buffers when sending gives
EAGAIN, add methods to query this and perform flushing of those buffers.
The transfer loop will continue sending until all upload data is
processed and the connection is flushed.
- add `CF_QUERY_SEND_PENDING` to query filters
- add `CF_CTRL_DATA_SEND_FLUSH` to flush filters
- change `Curl_req_want_send()` to query the connection on whether
it needs flushing (see the sketch after this list)
- use `Curl_req_want_send()` to determine the POLLOUT
in the PERFORMING multi state
- implement flush handling in the HTTP/2 connection filter
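The idea behind the query/flush pair, as a standalone sketch (struct
and function names are illustrative, not the real filter API):
```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative filter state: bytes accepted earlier but not yet written
 * out because the underlying socket returned EAGAIN. */
struct filter {
  size_t buffered;         /* pending bytes held by this filter */
  struct filter *next;
};

/* CF_QUERY_SEND_PENDING, in spirit: does any filter still hold data? */
static bool send_pending(const struct filter *chain)
{
  for(; chain; chain = chain->next)
    if(chain->buffered)
      return true;
  return false;
}

/* The "want send" decision combines the request's own upload state with
 * the need to flush filter buffers, so POLLOUT is kept in the
 * PERFORMING state until both are done. */
static bool want_send(bool more_upload_data, const struct filter *chain)
{
  return more_upload_data || send_pending(chain);
}
```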
Closes #14271
Adds a `bool eos` flag to send methods to indicate that the data
is the last chunk the involved transfer wants to send to the server.
This will help protocol filters like HTTP/2 and 3 to forward the
stream's EOF flag and also allows such calls to return EAGAIN when
buffers are not yet fully flushed.
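Conceptually, the send method signatures grow an `eos` parameter along
these lines (a sketch, not the exact filter prototypes):
```c
#include <stdbool.h>
#include <stddef.h>

typedef enum { SEND_OK, SEND_AGAIN, SEND_ERROR } send_result;

/* A send method that carries an end-of-stream marker. An HTTP/2 or
 * HTTP/3 filter can map `eos` onto the stream's END_STREAM/FIN flag,
 * and it may answer SEND_AGAIN, even for a zero-length eos call, while
 * previously buffered data is not yet fully flushed. */
typedef send_result (*send_method)(void *filter_ctx,
                                   const void *buf, size_t blen,
                                   bool eos, size_t *pnwritten);
```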
Closes #14220
- When a transfer sets `data->state.select_bits`, it is
scheduled for rerun with EXPIRE_NOW. If such a transfer
is blocked (due to PAUSE, for example), this will lead to
a busy loop.
- multi.c: check for transfer block
- sendf.*: add Curl_xfer_is_blocked()
- sendf.*: add client reader `is_paused()` callback
- implement the `is_paused()` callback where needed (see the sketch
below)
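A sketch of the blocked-transfer check (simplified types; the real
client reader interface has more to it):
```c
#include <stdbool.h>

struct client_reader {
  bool (*is_paused)(void *ctx);   /* optional pause report */
  void *ctx;
  struct client_reader *next;
};

/* Curl_xfer_is_blocked(), in spirit: a transfer is blocked when any
 * installed client reader reports itself paused. */
static bool xfer_is_blocked(const struct client_reader *readers)
{
  for(; readers; readers = readers->next)
    if(readers->is_paused && readers->is_paused(readers->ctx))
      return true;
  return false;
}

/* multi sketch: only expire the transfer "now" for set select_bits when
 * it is not blocked, which avoids the busy loop. */
static bool should_expire_now(int select_bits,
                              const struct client_reader *readers)
{
  return select_bits && !xfer_is_blocked(readers);
}
```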
Closes #13908
- clarify Curl_xfer_setup() with RECV/SEND flags and separate calls
for the socket they operate on. Add a shutdown flag for secondary
sockets (see the sketch after this list)
- change Curl_xfer_setup() calls to new functions
- implement non-blocking connection shutdown at the end of receiving or
sending a transfer
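The gist of the setup change, as a sketch (flag and struct names here
are illustrative, not the exact new API):
```c
#include <stdbool.h>

#define XFER_RECV  (1 << 0)   /* transfer will receive from the server */
#define XFER_SEND  (1 << 1)   /* transfer will send to the server */

struct xfer_setup {
  int sockindex;    /* 0 = primary socket, 1 = secondary (e.g. FTP data) */
  int flags;        /* XFER_RECV and/or XFER_SEND */
  bool shutdown;    /* shut down the connection when the transfer ends */
};

/* e.g. an FTP download on the secondary (data) connection that is to be
 * shut down, non-blocking, once receiving has ended */
static const struct xfer_setup ftp_data_download = { 1, XFER_RECV, true };
```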
Closes #13913
A transfer may perform several `SingleRequest`s before it succeeds.
This happens regularly for authentication, follows and retries on
failed connections. The "readwrite()" calls and the functions connected
to those carried a `bool *done` parameter to indicate that the current
`SingleRequest` is over. This may happen before the `upload_done` or
`download_done` bits of `SingleRequest` are set.
The problem is that `write_resp()` protocol handlers are now invoked in
places where the `bool *done` cannot be passed up to the caller.
Instead of being a bool in the call chain, it needs to become a member
of `SingleRequest`, reflecting its state.
This removes the `bool *done` parameter and adds the `done` bit to
`SingleRequest` instead. It adds `Curl_req_soft_reset()` for using a
`SingleRequest` in a follow up, clearing `done` and other
flags/counters.
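In shape, the change amounts to something like this sketch (field names
are illustrative):
```c
#include <stdbool.h>

/* Per-SingleRequest state relevant here. */
struct single_request {
  bool done;            /* this request/response cycle is over */
  bool upload_done;     /* all request body data has been sent */
  bool download_done;   /* all response body data has been received */
  long long bytecount;  /* body bytes handled in this attempt */
};

/* Soft reset: reuse the same struct for a follow-up request (auth,
 * redirect, retry) by clearing the per-attempt flags and counters,
 * without freeing buffers that can be reused. */
static void req_soft_reset(struct single_request *req)
{
  req->done = false;
  req->upload_done = false;
  req->download_done = false;
  req->bytecount = 0;
}
```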
Closes #13096
- Move all the "upload_done" handling to request.c
- add possibility to abort sending of a request
- add `Curl_req_done_sending()` for checks
- transfer.c: readwrite_upload() now clean
- removing data->state.ulbuf and data->req.upload_fromhere
- as well as data->req.upload_present
- set data->req.upload_done on having read all from
the client and completely flushed the send buffer
- tftp, remove setting of data->req.upload_fromhere
- serves no purpose as `upload_present` is not set
and the data itself is sent directly with `sendto()` anyway
- smtp, make upload EOB conversion a client reader
- xfer_ulbuf addition
- add xfer_ulbuf for borrowing, similar to xfer_buf
- use in file upload
- use in c-hyper body sending
- h1-proxy, remove init of data->state.ulbuf that is never used
- smb, add own send_buf instead of using data->state.ulbuf
Closes #13010
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer take a
socket/socketindex parameter. They decide from the transfer
setup of `conn->sockfd` and `conn->writesockfd` which
connection filter chain to operate on (see the sketch after
this list).
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
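A sketch of the layering (stand-in types; only the division of labour
matches the description above):
```c
#include <stddef.h>

enum sendres { SR_OK = 0, SR_AGAIN, SR_ERROR };

/* connection level (stand-in): the caller names the filter chain via
 * the socket index and SR_AGAIN is passed through untranslated */
extern enum sendres conn_send(void *conn, int sockindex,
                              const void *buf, size_t blen,
                              size_t *pnwritten);

/* transfer level (stand-in): the socket index comes from the transfer
 * setup and SR_AGAIN is translated into "SR_OK with 0 bytes written" */
static enum sendres xfer_send(void *conn, int writesockindex,
                              const void *buf, size_t blen,
                              size_t *pnwritten)
{
  enum sendres r = conn_send(conn, writesockindex, buf, blen, pnwritten);
  if(r == SR_AGAIN) {
    *pnwritten = 0;
    return SR_OK;
  }
  return r;
}
```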
Added as documented in
[CLIENT-READERS.md](5b1f31dfba/docs/CLIENT-READERS.md).
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending request headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves #12938 as now small and large requests have the
same code path (a simplified reader sketch follows below).
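Simplified shape of a client reader chain (see docs/CLIENT-READERS.md
for the real interface; these types are reduced for illustration):
```c
#include <stdbool.h>
#include <stddef.h>

/* Each reader pulls from the reader below it (the bottom one pulls from
 * the application's read callback) and may transform the data on the
 * way up, e.g. add HTTP/1.1 chunked framing or convert FTP line ends. */
struct creader {
  size_t (*do_read)(struct creader *self, char *buf, size_t blen,
                    bool *eos);
  struct creader *next;   /* the reader below this one */
  void *ctx;
};

/* Fill a request buffer from the top of the chain, for example to pack
 * body data into the space left after the request headers. */
static size_t client_read(struct creader *top, char *buf, size_t blen,
                          bool *eos)
{
  *eos = false;
  return top->do_read(top, buf, blen, eos);
}
```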
Changes done to test cases:
- test513: now fails before sending request headers as this initial
"client read" triggers the setup fault. It now behaves the same as in
the hyper build
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes #12969
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer take a
socket/socketindex parameter. They decide from the transfer
setup of `conn->sockfd` and `conn->writesockfd` which
connection filter chain to operate on.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk
This adds a method for sending not just raw bytes, but bytes that are
either "headers" or "body". The send abstraction stack, top to bottom,
now is:
* `Curl_req_send()`: has parameter to indicate amount of header bytes,
buffers all data.
* `Curl_xfer_send()`: knows on which socket index to send, returns
amount of bytes sent.
* `Curl_conn_send()`: called with socket index, returns amount of bytes
sent.
In addition there is `Curl_req_flush()` for writing out all buffered
bytes.
`Curl_req_send()` is active for requests without a body, while
`Curl_buffer_send()` is still being used for the others. This is
because some special quirks need to be addressed in future parts:
* `expect-100` handling
* `Curl_fillreadbuffer()` needs to add directly to the new
`data->req.sendbuf`
* special body handling, like `chunked` encoding and line end
conversions, will be moved into something like a Client Reader.
In functions of the pattern `CURLcode xxx_send(..., ssize_t *written)`,
replace the `ssize_t` with a `size_t`. It makes no sense to allow for negative
values as the returned `CURLcode` already specifies error conditions. This
allows easier handling of lengths without casting.
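As a before/after illustration (the function names are made up; the
`CURLcode` typedef is only a stand-in so the snippet is self-contained):
```c
#include <stddef.h>
#include <sys/types.h>      /* ssize_t */

typedef int CURLcode;       /* stand-in for the real enum */

/* before: a negative *written could never say more than the returned
 * CURLcode already does */
CURLcode xxx_send_old(const void *buf, size_t blen, ssize_t *written);

/* after: lengths stay unsigned end to end, no casts needed */
CURLcode xxx_send_new(const void *buf, size_t blen, size_t *written);
```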
Closes #12964
Curl_read/Curl_write clarifications
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to clarify
when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer take a
socket/socketindex parameter. They decide from the transfer setup of
`conn->sockfd` and `conn->writesockfd` which connection filter
chain to operate on.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index as
parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for naming
consistency
- clarify that the special CURLE_AGAIN handling to return `CURLE_OK`
with length 0 only applies to `Curl_xfer_send()` and CURLE_AGAIN is
returned by all other send() variants.
SingleRequest reshuffling
- move functions into request.[ch]
- differentiate between reset and free
- add Curl_req_done() to perform last actions
- add a send `bufq` to SingleRequest for future use in keeping upload data
Closes #12963
Refactoring of the client writer that passes the data to the
client/application's callback functions.
- split out from sendf.c into its own source file cw-out.[ch]
- move tempwrite and tempcount from data->state into the context of the
client writer
- redesign the 3 tempwrite dynbufs as a linked list of dynbufs. On
paused transfers, this allows "recording" interleaved HEADER/BODY
chunks to be "played back" in the same order on unpausing (see the
sketch after this list).
- keep the overall size limit of all buffered data to DYN_PAUSE_BUFFER.
On exceeding that, return CURLE_TOO_LARGE instead of
CURLE_OUT_OF_MEMORY as before.
- add method to be called when a transfer is DONE to allow writing of
any data still buffered
- when paused, record HEADER writes exactly as they come for later
playback. HEADERs are documented to be written one-by-one.
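A sketch of the paused-write buffering (simplified; the real code uses
dynbufs and maps the overflow to CURLE_TOO_LARGE):
```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define PAUSE_BUFFER_MAX (64 * 1024)  /* stand-in for DYN_PAUSE_BUFFER */

struct pending_write {
  int type;                     /* HEADER or BODY write */
  char *data;
  size_t len;
  struct pending_write *next;   /* kept in arrival order for replay */
};

/* Record one write made while the transfer is paused. Returns false on
 * allocation failure or when the overall cap would be exceeded. */
static bool record_paused_write(struct pending_write **head, size_t *total,
                                int type, const char *buf, size_t len)
{
  struct pending_write *pw, **tail;
  if(*total + len > PAUSE_BUFFER_MAX)
    return false;
  pw = calloc(1, sizeof(*pw));
  if(!pw)
    return false;
  pw->data = malloc(len ? len : 1);
  if(!pw->data) {
    free(pw);
    return false;
  }
  memcpy(pw->data, buf, len);
  pw->type = type;
  pw->len = len;
  for(tail = head; *tail; tail = &(*tail)->next)
    ;                           /* append, preserving write order */
  *tail = pw;
  *total += len;
  return true;
}
```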
Closes #12898
This clarifies the handling of server responses by folding the code for
the complicated protocols into their protocol handlers. This concerns
mainly HTTP and its bastard sibling RTSP.
The terms "read" and "write" are often used without clear context if
they refer to the connect or the client/application side of a
transfer. This PR uses "read/write" for operations on the client side
and "send/receive" for the connection, e.g. server side. If this is
considered useful, we can revisit renaming of further methods in another
PR.
Curl's protocol handler `readwrite()` method has been changed:
```diff
- CURLcode (*readwrite)(struct Curl_easy *data, struct connectdata *conn,
-                       const char *buf, size_t blen,
-                       size_t *pconsumed, bool *readmore);
+ CURLcode (*write_resp)(struct Curl_easy *data, const char *buf, size_t blen,
+                        bool is_eos, bool *done);
```
The name was changed to clarify that this writes response data to the
client side. The parameter changes are:
* `conn` removed as it always operates on `data->conn`
* `pconsumed` removed as the method needs to handle all data on success
* `readmore` removed as no longer necessary
* `is_eos` as indicator that this is the last call for the transfer
response (end-of-stream).
* `done` TRUE on return iff the transfer response is to be treated as
finished
This change affects many files only because of updated comments in
handlers that provide no implementation. The real change is that the
HTTP protocol handlers now provide an implementation.
The HTTP protocol handlers' `write_resp()` implementation will get passed
**all** raw data of a server response for the transfer. The HTTP/1.x
formatted status and headers, as well as the undecoded response
body. `Curl_http_write_resp_hds()` is used internally to parse the
response headers and pass them on. This method is public as the RTSP
protocol handler also uses it.
HTTP/1.1 "chunked" transport encoding is now part of the general
*content encoding* writer stack, just like other encodings. A new flag
`CLIENTWRITE_EOS` was added for the last client write. This allows
writers to verify that they are in a valid end state. The chunked
decoder will check if it indeed has seen the last chunk.
The general response handling in `transfer.c:466` happens in function
`readwrite_data()`. This mainly operates now like:
```
static CURLcode readwrite_data(data, ...)
{
  do {
    Curl_xfer_recv_resp(data, buf)
    ...
    Curl_xfer_write_resp(data, buf)
    ...
  } while(interested);
  ...
}
```
All the response data handling is implemented in
`Curl_xfer_write_resp()`. It calls the protocol handler's `write_resp()`
implementation if available, or does the default behaviour.
All raw response data needs to pass through this function. Which also
means that anyone in possession of such data may call
`Curl_xfer_write_resp()`.
Closes #12480
- let `multi_getsock()` initialize the pollset with what the
transfer state requires in regard to SEND/RECV
- change the connection filters' `adjust_pollset()` implementation
to react to the presence of POLLIN/-OUT in the pollset and
no longer check CURL_WANT_SEND/CURL_WANT_RECV (see the sketch
after this list)
- cf-socket will no longer add POLLIN on its own
- http2 and http/3 filters will only do adjustments if the
passed pollset wants to POLLIN/OUT for the transfer on
the socket. This is similar to the HTTP/2 proxy filter
and works in stacked filters.
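The changed contract for `adjust_pollset()`, as a reduced sketch (a
real pollset tracks several sockets; this keeps one):
```c
#include <stdbool.h>

#define WANT_IN  (1 << 0)
#define WANT_OUT (1 << 1)

struct pollset {
  int sockfd;
  int events;     /* WANT_IN and/or WANT_OUT, as set by multi_getsock() */
};

/* Filter-level adjustment: only act when the transfer already polls
 * this socket. A multiplexing or TLS filter that has its own frames to
 * write adds WANT_OUT even if the transfer itself only wants to read. */
static void adjust_pollset(struct pollset *ps, int sockfd,
                           bool filter_wants_send)
{
  if(ps->sockfd != sockfd || !ps->events)
    return;                   /* no interest from the transfer: leave it */
  if(filter_wants_send)
    ps->events |= WANT_OUT;
}
```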
Closes #12640
- use `data->state.dselect_bits` everywhere instead
- remove `bool *comeback` parameter as non-zero
`data->state.dselect_bits` will indicate that IO is
incomplete.
Closes #12512
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow copyright statements without a year
Closes #10205
This makes a big difference in cases when the rewind is not actually
necessary to perform (for example an HTTP 301 response converts the
request to GET) and the rewind can therefore be avoided. It helps in
particular in situations where that rewind would fail, for example
when reading from a pipe or similar.
Reported-by: Ali Utku Selen
Fixes #9735
Closes #9958
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes #8869
... in most cases instead of 'struct connectdata *', but in some cases
in addition to it.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes #6425
When doing HTTP authentication with a port number set with
CURLOPT_PORT, the code would previously let the URL's port number
override the set one, as if it had been a redirect to an absolute URL.
Added test 1568 to verify.
Reported-by: UrsusArctos on github
Fixes #6397
Closes #6400
It was used (or rather intended) to pass in the size of the 'socks'
array that is also passed to these functions, but it was rarely
actually checked or used. The array is defined to a fixed size of
MAX_SOCKSPEREASYHANDLE entries, which should be used instead.
Closes #4169
There were a few leftover prototypes of Curl_ functions that we used
to export but no longer do. This removes those prototypes and cleans up
any comments still referring to them.
Curl_write32_le(), Curl_strcpy_url(), Curl_strlen_url(), Curl_up_free()
Curl_concat_url(), Curl_detach_connnection(), Curl_http_setup_conn()
were made static in 05b100aee2.
Curl_http_perhapsrewind() made static in 574aecee20.
For the remainder, I didn't trawl the Git logs hard enough to capture
their exact time of deletion, but they were all gone: Curl_splayprint(),
Curl_http2_send_request(), Curl_global_host_cache_dtor(),
Curl_scan_cache_used(), Curl_hostcache_destroy(), Curl_second_connect(),
Curl_http_auth_stage() and Curl_close_connections().
Closes #4096
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
To make sure an HTTP/2 stream registers the end of stream.
Bug #4043 made me find this problem but this fix doesn't correct the
reported issue.
Closes #4068
- no need to have them protocol specific
- no need to set pointers to them with the Curl_setup_transfer() call
- make Curl_setup_transfer() operate on a transfer pointer, not
connection
- switch some counters from long to the more proper curl_off_t type
Closes #3627
- replace tabs with spaces where possible
- remove line ending spaces
- remove double/triple newlines at EOF
- fix a non-UTF-8 character
- cleanup a few indentations/line continuations
in manual examples
Closes https://github.com/curl/curl/pull/3037
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE &
CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing limits
with the cumulative average speed of the entire transfer; while this
might work at times with good/constant connections, in other cases it
can result in the limits simply being "ignored" for more than the
"short bursts" mentioned in the man page.
Consider a download that runs much slower than the limit for some time
(because bandwidth is used elsewhere, the server is slow, whatever the
reason); once things get better, curl would then simply ignore the
limit until the average speed (since the beginning of the transfer)
reached the limit. This could render the limit useless for effectively
avoiding use of the entire bandwidth (at least for quite some time).
So instead, we now use a "moving starting point" as reference, and
every time at least as much as the limit has been transferred, we reset
this starting point to the current position. This gives a good limiting
effect that applies to the "current speed" with instant reactivity (in
case of a sudden speed burst); see the sketch below.
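A sketch of the moving-reference computation (simplified; curl's actual
code lives in the progress handling and differs in detail):
```c
/* Return how many milliseconds to wait before transferring more, and
 * advance the reference point once at least one "limit" worth of bytes
 * has passed since it, so only the current burst is measured instead
 * of the whole transfer's average. */
static long limit_wait_ms(long long cursize,        /* bytes so far */
                          long long *ref_size,      /* bytes at reference */
                          long long *ref_time_ms,   /* time of reference */
                          long long now_ms,
                          long long limit_bps)      /* bytes per second */
{
  long long moved = cursize - *ref_size;
  long long elapsed_ms = now_ms - *ref_time_ms;
  long long min_ms;

  if(!limit_bps || moved < limit_bps)
    return 0;                  /* less than one limit-chunk: no waiting */

  /* the shortest time the moved bytes may take at the limit rate */
  min_ms = moved * 1000 / limit_bps;

  /* move the reference so the next check measures a fresh burst */
  *ref_size = cursize;
  *ref_time_ms = now_ms;

  return (elapsed_ms < min_ms) ? (long)(min_ms - elapsed_ms) : 0;
}
```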
Closes #971
Regression added in 790d6de485. That was then added to avoid one
particular transfer starving out others. But when aborting due to
reading the maxcount, the connection must be marked to be read from
again without first doing a select as for some protocols (like SFTP/SCP)
the data may already have been read off the socket.
Reported-by: Dan Donahue
Bug: https://curl.haxx.se/mail/lib-2016-07/0057.html
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids having code that assigns
fields in the 'set' struct (which we always knew was bad) and, more
importantly, B) makes it impossible to accidentally leave the wrong
value in place for when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state.
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes #346
With many easy handles using the same connection for multiplexing, it is
important we store and keep the transfer-oriented stuff in the
SessionHandle so that callbacks and callback data work fine even when
many easy handles share the same physical connection.
This reverts renaming and usage of lib/*.h header files done
28-12-2012, reverting 2 commits:
f871de0... build: make use of 76 lib/*.h renamed files
ffd8e12... build: rename 76 lib/*.h files
This also reverts removal of redundant include guard (redundant thanks
to changes in above commits) done 2-12-2013, reverting 1 commit:
c087374... curl_setup.h: remove redundant include guard
This also reverts renaming and usage of lib/*.c source files done
3-12-2013, reverting 3 commits:
13606bb... build: make use of 93 lib/*.c renamed files
5b6e792... build: rename 93 lib/*.c files
7d83dff... build: commit 13606bbfde follow-up 1
Start of related discussion thread:
http://curl.haxx.se/mail/lib-2013-01/0012.html
Asking for confirmation on pushing this reversion commit:
http://curl.haxx.se/mail/lib-2013-01/0048.html
Confirmation summary:
http://curl.haxx.se/mail/lib-2013-01/0079.html
NOTICE: Below follows the list of 2 files that, while renamed, have
been modified both by other intermixed commits and by at least one of
the 6 commits this one reverts. These 2 files will exhibit a hole in
their history unless git's '--follow' option is used when viewing logs.
lib/curl_imap.h
lib/curl_smtp.h