Add `mime` client reader. Encapsulates reading from mime parts, getting
their length, rewinding and unpausing.
- remove special mime handling from sendf.c and easy.c
- add general "unpause" method to client readers
- use new reader in http/imap/smtp
- make some mime functions static that are now only used internally
In addition:
- remove flag 'forbidchunk' as no longer needed
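As a rough standalone sketch of what such a reader bundles together (made-up names and types, not curl's internal API):
```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, heavily simplified reader vtable; curl's real client
 * readers have different names and signatures. */
struct reader_ops {
  size_t (*read)(void *ctx, char *buf, size_t len); /* pull body bytes */
  long long (*total_length)(void *ctx);             /* -1 when unknown */
  void (*rewind)(void *ctx);                        /* restart from offset 0 */
  void (*unpause)(void *ctx);                       /* resume a paused source */
};

/* toy "mime" source: one in-memory part */
struct mime_ctx { const char *data; size_t len, off; };

static size_t mime_read(void *p, char *buf, size_t len)
{
  struct mime_ctx *m = p;
  size_t n = m->len - m->off;
  if(n > len)
    n = len;
  memcpy(buf, m->data + m->off, n);
  m->off += n;
  return n;
}
static long long mime_total(void *p) { return (long long)((struct mime_ctx *)p)->len; }
static void mime_rewind(void *p) { ((struct mime_ctx *)p)->off = 0; }
static void mime_unpause(void *p) { (void)p; /* nothing buffered in this toy */ }

static const struct reader_ops mime_reader = {
  mime_read, mime_total, mime_rewind, mime_unpause
};

int main(void)
{
  struct mime_ctx m = { "name=value", 10, 0 };
  char buf[4];
  size_t n;
  printf("length: %lld\n", mime_reader.total_length(&m));
  while((n = mime_reader.read(&m, buf, sizeof(buf))) > 0)
    printf("read %zu bytes\n", n);
  mime_reader.rewind(&m); /* e.g. before resending after a redirect */
  return 0;
}
```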
Closes #13039
- set TIMER_STARTTRANSFER on seeing the first response bytes
in the download client writer, not coming from a CONNECT
- initialize the timer the same way for all protocols
- remove explicit setting of TIMER_STARTTRANSFER in file.c
and c-hyper.c
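A minimal sketch of the idea, with made-up types (curl's own code uses Curl_pgrsTime() for this):
```c
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

/* Hypothetical, simplified writer state; curl itself stamps the timer
 * via Curl_pgrsTime(data, TIMER_STARTTRANSFER) inside its download
 * client writer. */
struct dl_writer {
  bool seen_first_bytes;
  struct timespec starttransfer;
};

static void dl_write(struct dl_writer *w, const char *buf, size_t len)
{
  (void)buf;
  if(len && !w->seen_first_bytes) {
    /* first response bytes of the transfer: start the timer here, the
       same way for every protocol, and never for CONNECT traffic that
       does not pass through this writer */
    clock_gettime(CLOCK_MONOTONIC, &w->starttransfer);
    w->seen_first_bytes = true;
  }
  /* ...pass the bytes on to the next writer in the chain... */
}

int main(void)
{
  struct dl_writer w = { false, { 0, 0 } };
  dl_write(&w, "HTTP/1.1 200 OK\r\n", 17);
  return w.seen_first_bytes ? 0 : 1;
}
```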
Closes #13052
If a response without a status line is received, and the connection is
known to use HTTP/1.x (not HTTP/0.9), report the error "Invalid status
line" instead of "Received HTTP/0.9 when not allowed".
Closes #13045
In cases where the connection was fast, curl sometimes failed to open a
connection. This fixes a regression of c2d973627bab12abc5486a3f3.
The regression triggered in these steps:
1. Create an smtp connection
2. Use STARTTLS
3. Receive the response
4. We are inside the loop in `smtp_statemachine`, calling
`smtp_state_starttls_resp`
5. In the good flow, we exit the loop, re-enter `smtp_statemachine` and
run `smtp_perform_upgrade_tls` at the start of the function.
In the bad flow, we stay in the while loop, calling
`Curl_pp_readresp`, which reads part of the TLS handshake and things
go wrong.
The reason is that `Curl_pp_moredata` changed behavior and always
returns `true`, so we stay in the loop in `smtp_statemachine`. With a
slow connection `Curl_pp_readresp` cannot read new data and returns
`CURLE_AGAIN`, so we leave the loop and re-enter `smtp_statemachine`.
With a fast connection, `Curl_pp_readresp` reads new data from the TCP
connection, which is part of the TLS handshake.
The fix is in `Curl_pp_moredata`, which needs to take the final line
into account and return `false` if only the final line is stored.
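A standalone sketch of the fixed check, with illustrative field names:
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified pingpong buffer state; field names are
 * illustrative, not the ones in curl's struct pingpong. */
struct pp_sketch {
  size_t stored;     /* response bytes currently buffered */
  size_t final_line; /* length of the already-received final line */
};

/* "more data" must mean: something buffered beyond the final line.
   If only the final line is stored, returning true keeps the SMTP state
   machine in its loop, where a fast connection then reads TLS handshake
   bytes as if they were a server response. */
static bool pp_moredata(const struct pp_sketch *pp)
{
  return pp->stored > pp->final_line;
}

int main(void)
{
  struct pp_sketch only_final = { 30, 30 };
  struct pp_sketch extra = { 42, 30 };
  printf("%d %d\n", pp_moredata(&only_final), pp_moredata(&extra)); /* 0 1 */
  return 0;
}
```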
Closes #13048
- update client reader documentation
- client reader, add rewind capabilities
- tell creader to rewind on next start
- Curl_client_reset() will keep reader for future rewind if requested
- add Curl_client_cleanup() for freeing all resources independent of
rewinds
- add Curl_client_start() to trigger rewinds
- move rewind code from multi.c to sendf.c and make part of
"cr-in"'s implementation
- http, move the "resume_from" handling into the client readers
- the setup of an HTTP request is reshuffled to follow:
* determine method, target, auth negotiation
* install the client reader(s) for the request, including crlf
conversions and "chunked" encoding
* apply ranges to client reader
* concat request headers, upgrades, cookies, etc.
* complete request by determining Content-Length of installed
readers in combination with method
* send
- add methods for client readers to
* return the overall length they will generate (or -1 when unknown)
* return the amount of data on the CLIENT level, so that
expect-100 can decide if it wants to apply itself
* set a "resume_from" offset or fail if unsupported
- struct HTTP has become largely empty now
- rename `Client_reader_*` to `Curl_creader_*`
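A hypothetical, simplified version of such a reader interface and one way the expect-100 decision could consult it:
```c
#include <stdbool.h>

typedef long long off_sketch; /* stand-in for curl_off_t */

/* Hypothetical, simplified client-reader methods mirroring the bullets
 * above; this is not curl's actual Curl_creader interface. */
struct creader_sketch {
  /* overall length this reader will generate, -1 when unknown */
  off_sketch (*total_length)(void *ctx);
  /* amount of data available at the CLIENT level right now */
  off_sketch (*client_pending)(void *ctx);
  /* skip the first 'offset' bytes, or fail when unsupported */
  bool (*resume_from)(void *ctx, off_sketch offset);
};

/* illustrative policy only: apply Expect: 100-continue when the client
   does not already have a small body fully available */
static bool want_expect_100(const struct creader_sketch *cr, void *ctx,
                            off_sketch small_limit)
{
  off_sketch pending = cr->client_pending(ctx);
  return pending < 0 || pending > small_limit;
}

int main(void) { (void)want_expect_100; return 0; }
```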
Closes #13026
Caused by an accidentally duplicated line in
d6825df334def106f735ce7e0c1a2ea87bddffb0.
```
.../lib/vquic/curl_osslq.c:1095:30: warning: implicit conversion loses integer precision: 'curl_socket_t' (aka 'unsigned long long') to 'int' [-Wshorten-64-to-32]
1095 | bio = BIO_new_dgram(ctx->q.sockfd, BIO_NOCLOSE);
| ~~~~~~~~~~~~~ ~~~~~~~^~~~~~
1 warning and 2 errors generated.
```
Reviewed-by: Stefan Eissing
Closes #13043
- rename static functions to avoid duplicate symbols in unity mode.
- windows -> Windows/window in error message and comment.
- fix indentation.
Reviewed-by: Stefan Eissing
Closes #13044
The function that replaces occurrences of "--longoption" with "-Z,
--longoption" etc. with the proper highlight applied no longer loops
over the options.
Closes #13041
- pytest has changed the signature of the hook pytest_report_header()
for some obscure reason and that change landed in our CI now
- remove the changed param that we never used anyway
Closes #13037
A libpsl install without data and no built-in database is now considered
bad enough to reject all cookies since they cannot be checked. It is
somewhat of a user error, but still.
Reported-by: Dan Fandrich
Closes #13033
- Move all the "upload_done" handling to request.c
- add possibility to abort sending of a request
- add `Curl_req_done_sending()` for checks
- transfer.c: readwrite_upload() now clean
- removing data->state.ulbuf and data->req.upload_fromhere
- as well as data->req.upload_present
- set data->req.upload_done on having read all from
the client and completely flushed the send buffer
- tftp, remove setting of data->req.upload_fromhere
- serves no purpose as `upload_present` is not set
and the data itself is directly `sendto()` anyway
- smtp, make upload EOB conversion a client reader
- xfer_ulbuf addition
- add xfer_ulbuf for borrowing, similar to xfer_buf
- use in file upload
- use in c-hyper body sending
- h1-proxy, remove init of data->state.ulbuf that is never used
- smb, add own send_buf instead of using data->state.ulbuf
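A minimal sketch of that rule, with made-up field names:
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified request-send state; these are not the real
 * fields of curl's struct SingleRequest. */
struct req_sketch {
  bool client_eos;    /* client reader has delivered all body data */
  size_t sendbuf_len; /* bytes still sitting in the send buffer */
  bool upload_done;
};

/* the rule from above: the upload is done only when everything has been
   read from the client AND the send buffer is completely flushed */
static void req_check_done_sending(struct req_sketch *req)
{
  if(req->client_eos && !req->sendbuf_len)
    req->upload_done = true;
}

int main(void)
{
  struct req_sketch req = { true, 0, false };
  req_check_done_sending(&req);
  printf("upload_done=%d\n", req.upload_done);
  return 0;
}
```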
Closes #13010
- when unable to obtain a new chunk on a softlimit bufq,
this is an allocation error and needs to be reported as
such.
- writes into a softlimit bufq must never be a partial success
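A standalone sketch of the intended semantics (not curl's bufq code):
```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified "soft limit" queue, not curl's bufq API: it
 * may grow beyond its soft limit, but a write either takes all bytes or
 * fails outright, and a failed chunk allocation is an OOM error. */
struct softq { char *mem; size_t len, cap; };

static int softq_write(struct softq *q, const char *buf, size_t len)
{
  if(q->len + len > q->cap) {
    char *p = realloc(q->mem, q->len + len); /* grow past the soft limit */
    if(!p)
      return -1; /* allocation error: report it, never pretend a
                    partial (or zero-byte) success */
    q->mem = p;
    q->cap = q->len + len;
  }
  memcpy(q->mem + q->len, buf, len);
  q->len += len;
  return 0; /* all-or-nothing */
}

int main(void)
{
  struct softq q = { NULL, 0, 0 };
  int rc = softq_write(&q, "hello", 5);
  free(q.mem);
  return rc;
}
```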
Reported-by: Dan Fandrich
Fixes #13020
Closes #13023
With the recent changes to completion file building, the files were
built always and only installation was selectively disabled. Now, when
they are disabled they aren't even built, avoiding a build-time error in
environments where it's not possible to run the curl binary that was
just created (e.g. if library paths were not set up correctly).
Follow-up to 0f7aba83c
Reported-by: av223119 on github
Fixes #13027
Closes #13030
The code that attempted to skip building the shell completions didn't
work properly and tried to build them even if perl wasn't available.
This step, as well as the install step, is now properly skipped without
perl.
Follow-up to 89733e2dd
Closes #13022
This fixes miscellaneous typos and duplicated words in the docs, lib
and test comments, and a few user-facing error strings.
Author: RainRat on Github
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Dan Fandrich <dan@coneharvesters.com>
Closes: #13019
The --with-fish-functions-dir and --with-zsh-functions-dir options
currently have no effect on a normal build because the scripts/ directory
where they're used is not built. Add scripts/ to a normal build and
change the completion options to default to off to preserve the existing
behaviour.
Closes: #12906
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
Added as documented [in CLIENT-READERS.md](5b1f31dfba/docs/CLIENT-READERS.md).
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending request headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves #12938 as now small and large requests have the
same code path.
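A rough sketch of the "one go" behavior, with simplified, hypothetical buffer handling:
```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the "one go" idea: after the request headers
 * are placed in the send buffer, any space left is filled with body
 * bytes pulled from the client reader, regardless of total body size.
 * Names and buffer handling are simplified, not curl's implementation. */
typedef size_t (*client_read_cb)(void *ctx, char *buf, size_t len);

static size_t build_first_send(char *sendbuf, size_t bufsize,
                               const char *headers, size_t hdr_len,
                               client_read_cb read_body, void *ctx)
{
  size_t n = (hdr_len < bufsize) ? hdr_len : bufsize;
  memcpy(sendbuf, headers, n);
  if(n < bufsize) /* room left after the headers? */
    n += read_body(ctx, sendbuf + n, bufsize - n);
  return n;       /* bytes ready for the first send() call */
}

int main(void) { (void)build_first_send; return 0; }
```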
Changes done to test cases:
- test513: now fails before sending request headers as this initial
"client read" triggers the setup fault. Behaves now the same as in
hyper build
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes #12969
The curldown conversion accidentally replaced daniel@haxx.se with
just daniel.se. This reverts back to the proper email address in
the curldown docs as well as in a few other stray places where it
was incorrect (while unrelated to curldown).
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Closes: #12997
When disabling all protocols without enabling any, the resulting
set of allowed protocols remained the default set. Clearing the
allowed set before inspecting the passed value from --proto makes
the set empty even in the error path of no protocols enabled.
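A tiny sketch of the clearing logic, with illustrative flags and the full token parsing elided:
```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the fix; bit values and parsing are
 * illustrative, not the curl tool's real option handling. */
#define P_HTTP  (1u << 0)
#define P_HTTPS (1u << 1)

static int apply_proto_option(const char *spec, unsigned int *allowed)
{
  *allowed = 0; /* clear first: a value that only disables protocols must
                   end up empty, not fall back to the default set */
  if(!strcmp(spec, "+http"))
    *allowed |= P_HTTP;
  else if(!strcmp(spec, "+https"))
    *allowed |= P_HTTPS;
  /* ...full token parsing (-proto, -all, etc.) elided... */
  return *allowed ? 0 : 1; /* empty set: "no protocols enabled" error */
}

int main(void)
{
  unsigned int allowed;
  printf("%d\n", apply_proto_option("-all", &allowed)); /* prints 1: error */
  return 0;
}
```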
Co-authored-by: Dan Fandrich <dan@telarity.com>
Reported-by: Dan Fandrich <dan@telarity.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Closes: #13004
This was fixed in commit 06dc599405f, but came back in commit
03cb1ff4d62.
When building for 32-bit ARM or x86 Android, `st_mode` is defined as
`unsigned int` instead of `mode_t`, resulting in a
`-Wimplicit-int-conversion` clang warning because `mode_t` is
`unsigned short`. Add a cast to silence the warning, but only for
32-bit Android builds, because other architectures and platforms are
not affected.
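An illustrative sketch of such a cast (the exact location and condition in curl may differ):
```c
#include <sys/stat.h>

/* Illustrative only: a cast of the kind described, applied where the
 * stat field is consumed as mode_t; the exact spot and condition used
 * in curl may differ. */
static mode_t sketch_file_mode(const struct stat *st)
{
#if defined(__ANDROID__) && (defined(__i386__) || defined(__arm__))
  /* 32-bit Android bionic: st_mode is unsigned int, mode_t is unsigned
     short, so cast to avoid -Wimplicit-int-conversion */
  return (mode_t)st->st_mode;
#else
  return st->st_mode;
#endif
}

int main(void)
{
  struct stat st;
  st.st_mode = 0644;
  return (int)sketch_file_mode(&st) != 0644;
}
```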
Ref: https://android.googlesource.com/platform/bionic/+/refs/tags/ndk-r25c/libc/include/sys/stat.h#86
Closes https://github.com/curl/curl/pull/12998
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk
This adds a method for sending not just raw bytes, but bytes that are
either "headers" or "body". The send abstraction stack, top to bottom,
now is:
* `Curl_req_send()`: has parameter to indicate amount of header bytes,
buffers all data.
* `Curl_xfer_send()`: knows on which socket index to send, returns
amount of bytes sent.
* `Curl_conn_send()`: called with socket index, returns amount of bytes
sent.
In addition there is `Curl_req_flush()` for writing out all buffered
bytes.
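As a sketch of that layering with hypothetical, simplified signatures:
```c
#include <stddef.h>

/* Hypothetical signatures sketching the three layers described above;
 * parameter lists are simplified and do not match curl's internals. */
typedef int result_sketch;   /* stand-in for CURLcode */
struct easy_sketch;          /* stand-in for struct Curl_easy */

/* bottom: send on one connection filter chain, picked by socket index */
result_sketch conn_send(struct easy_sketch *data, int sockindex,
                        const void *buf, size_t len, size_t *sent);

/* middle: decides whether conn->sockfd or conn->writesockfd applies,
   so callers no longer pass a socket index themselves */
result_sketch xfer_send(struct easy_sketch *data,
                        const void *buf, size_t len, size_t *sent);

/* top: takes the number of header bytes at the start of buf and
   buffers whatever cannot be sent right away */
result_sketch req_send(struct easy_sketch *data, const void *buf,
                       size_t len, size_t hdr_len);

/* flush whatever req_send() still holds buffered */
result_sketch req_flush(struct easy_sketch *data);
```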
`Curl_req_send()` is active for requests without a body, while
`Curl_buffer_send()` is still used for others. This is because special
quirks need to be addressed in future parts:
* `expect-100` handling
* `Curl_fillreadbuffer()` needs to add directly to the new
`data->req.sendbuf`
* special body handlings, like `chunked` encodings and line end
conversions will be moved into something like a Client Reader.
In functions of the pattern `CURLcode xxx_send(..., ssize_t *written)`,
replace the `ssize_t` with a `size_t`. It makes no sense to allow for negative
values as the returned `CURLcode` already specifies error conditions. This
allows easier handling of lengths without casting.
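A sketch of the signature pattern change, with hypothetical function names:
```c
#include <stddef.h>
#include <sys/types.h>

typedef int result_sketch; /* stand-in for CURLcode */

/* before: the sign of *written carried no information, since errors
   already arrive through the returned code */
result_sketch send_old(const void *buf, size_t len, ssize_t *written);

/* after: size_t only, so callers can add and compare lengths without
   casting (hypothetical names, shown for the pattern only) */
result_sketch send_new(const void *buf, size_t len, size_t *written);
```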
Closes #12964
If the easy handle that is being added to a multi handle has previously
been used for curl_easy_perform(), there is a private multi handle here
that we can kill off. While it flushes some caches etc. for the easy
handle, should it be used for an easy interface transfer again after
being used in the multi stack, this cleanup simplifies behavior and
uses less memory.
Closes #12992