docs: use present tense

avoid "will", detect "will" as a bad word in the CI

Also line wrapped a bunch of paragraphs

Closes #13001
Daniel Stenberg 2024-02-27 07:48:10 +01:00
parent f73cb3ebd2
commit 2097a095c9
GPG Key ID: 5CC908FDB71E12C2
45 changed files with 621 additions and 538 deletions

View File

@ -47,3 +47,4 @@ didn't:did not
doesn't:does not
won't:will not
couldn't:could not
\bwill\b:rewrite to present tense

View File

@ -13,12 +13,12 @@ as many internal Curl read and write ones.
ssize_t Curl_bufq_write(struct bufq *q, const unsigned char *buf, size_t len, CURLcode *err);
- returns the length written into `q` or -1 on error.
- writing to a full `q` will return -1 and set *err to CURLE_AGAIN
- writing to a full `q` returns -1 and sets *err to CURLE_AGAIN
ssize_t Curl_bufq_read(struct bufq *q, unsigned char *buf, size_t len, CURLcode *err);
- returns the length read from `q` or -1 on error.
- reading from an empty `q` will return -1 and set *err to CURLE_AGAIN
- reading from an empty `q` returns -1 and sets *err to CURLE_AGAIN
```
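To make the `CURLE_AGAIN` convention concrete, here is a minimal usage
sketch. It assumes curl's in-tree `bufq.h` header and only the calls shown in
this document; `Curl_bufq_free()` is the free call implied by the lifetime
section below, and all sizes are arbitrary.

```
#include "bufq.h"  /* curl-internal header, assumed available in-tree */

static void bufq_roundtrip(void)
{
  struct bufq q;
  unsigned char out[128];
  CURLcode err;
  ssize_t n;

  Curl_bufq_init(&q, 128, 4);  /* 4 chunks of 128 bytes each */

  n = Curl_bufq_write(&q, (const unsigned char *)"hello", 5, &err);
  if(n < 0 && err == CURLE_AGAIN) {
    /* the bufq is full: retry after some of it has been read off */
  }

  n = Curl_bufq_read(&q, out, sizeof(out), &err);
  if(n < 0 && err == CURLE_AGAIN) {
    /* the bufq is empty: nothing buffered right now */
  }

  Curl_bufq_free(&q);
}
```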
@ -32,10 +32,11 @@ ssize_t Curl_bufq_slurp(struct bufq *q, Curl_bufq_reader *reader, void *reader_c
CURLcode *err);
```
`Curl_bufq_slurp()` will invoke the given `reader` callback, passing it its own internal
buffer memory to write to. It may invoke the `reader` several times, as long as it has space
and while the `reader` always returns the length that was requested. There are variations of `slurp` that call the `reader` at most once or only read in a
maximum amount of bytes.
`Curl_bufq_slurp()` invokes the given `reader` callback, passing it its own
internal buffer memory to write to. It may invoke the `reader` several times,
as long as it has space and while the `reader` always returns the length that
was requested. There are variations of `slurp` that call the `reader` at most
once or only read up to a maximum number of bytes.
The analog mechanism for writing out buffer data is:
@ -47,8 +48,8 @@ ssize_t Curl_bufq_pass(struct bufq *q, Curl_bufq_writer *writer, void *writer_ct
CURLcode *err);
```
`Curl_bufq_pass()` will invoke the `writer`, passing its internal memory and remove the
amount that `writer` reports.
`Curl_bufq_pass()` invokes the `writer`, passing its internal memory and
removes the amount that `writer` reports.
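As a sketch of the callback side: the exact `Curl_bufq_writer` typedef is not
shown in this excerpt, so the shape below (context pointer, data, length,
error out-parameter, returning the amount consumed or -1) is an assumption
based on the description above.

```
#include <unistd.h>

/* Hypothetical writer callback for Curl_bufq_pass(); the parameter list
   is assumed, not quoted from curl's headers. */
static ssize_t fd_writer(void *writer_ctx, const unsigned char *buf,
                         size_t len, CURLcode *err)
{
  int fd = *(int *)writer_ctx;
  ssize_t n = write(fd, buf, len);
  if(n < 0) {
    *err = CURLE_SEND_ERROR;  /* or CURLE_AGAIN on EWOULDBLOCK */
    return -1;
  }
  *err = CURLE_OK;
  return n;  /* Curl_bufq_pass() removes this many bytes from q */
}
```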
## peek and skip
@ -58,8 +59,8 @@ It is possible to get access to the memory of data stored in a `bufq` with:
bool Curl_bufq_peek(const struct bufq *q, const unsigned char **pbuf, size_t *plen);
```
On returning TRUE, `pbuf` will point to internal memory with `plen` bytes that one may read. This will only
be valid until another operation on `bufq` is performed.
On returning TRUE, `pbuf` points to internal memory with `plen` bytes that one
may read. This is only valid until another operation on `bufq` is performed.
Instead of reading `bufq` data, one may simply skip it:
@ -67,20 +68,22 @@ Instead of reading `bufq` data, one may simply skip it:
void Curl_bufq_skip(struct bufq *q, size_t amount);
```
This will remove `amount` number of bytes from the `bufq`.
This removes `amount` bytes from the `bufq`.
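A typical peek-then-skip pattern, sketched with a hypothetical `inspect()`
parser standing in for real protocol code:

```
const unsigned char *buf;
size_t blen;

if(Curl_bufq_peek(&q, &buf, &blen)) {
  size_t consumed = inspect(buf, blen);  /* hypothetical parser */
  Curl_bufq_skip(&q, consumed);          /* drop only what was used */
  /* `buf` must not be dereferenced after this point */
}
```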
## lifetime
`bufq` is initialized and freed similar to the `dynbuf` module. Code using `bufq` will
hold a `struct bufq` somewhere. Before it uses it, it invokes:
`bufq` is initialized and freed similarly to the `dynbuf` module. Code using
`bufq` holds a `struct bufq` somewhere. Before it uses it, it invokes:
```
void Curl_bufq_init(struct bufq *q, size_t chunk_size, size_t max_chunks);
```
The `bufq` is told how many "chunks" of data it shall hold at maximum and how large those
"chunks" should be. There are some variants of this, allowing for more options. How "chunks" are handled in a `bufq` is presented in the section about memory management.
The `bufq` is told how many "chunks" of data it shall hold at maximum and how
large those "chunks" should be. There are some variants of this, allowing for
more options. How "chunks" are handled in a `bufq` is presented in the section
about memory management.
The user of the `bufq` has the responsibility to call:
@ -95,25 +98,39 @@ void Curl_bufq_reset(struct bufq *q);
## memory management
Internally, a `bufq` uses allocation of fixed size, e.g. the "chunk_size", up to a maximum number, e.g. "max_chunks". These chunks are allocated on demand, therefore writing to a `bufq` may return `CURLE_OUT_OF_MEMORY`. Once the max number of chunks are used, the `bufq` will report that it is "full".
Internally, a `bufq` uses allocations of a fixed size, the "chunk_size", up
to a maximum number, the "max_chunks". These chunks are allocated on demand,
therefore writing to a `bufq` may return `CURLE_OUT_OF_MEMORY`. Once the max
number of chunks is used, the `bufq` reports that it is "full".
Each chunks has a `read` and `write` index. A `bufq` keeps its chunks in a list. Reading happens always at the head chunk, writing always goes to the tail chunk. When the head chunk becomes empty, it is removed. When the tail chunk becomes full, another chunk is added to the end of the list, becoming the new tail.
Each chunk has a `read` and a `write` index. A `bufq` keeps its chunks in a
list. Reading always happens at the head chunk, writing always goes to the
tail chunk. When the head chunk becomes empty, it is removed. When the tail
chunk becomes full, another chunk is added to the end of the list, becoming
the new tail.
Chunks that are no longer used are returned to a `spare` list by default. If the `bufq` is created with option `BUFQ_OPT_NO_SPARES` those chunks will be freed right away.
Chunks that are no longer used are returned to a `spare` list by default. If
the `bufq` is created with option `BUFQ_OPT_NO_SPARES`, those chunks are freed
right away.
If a `bufq` is created with a `bufc_pool`, the no longer used chunks are returned to the pool. Also `bufq` will ask the pool for a chunk when it needs one. More in section "pools".
If a `bufq` is created with a `bufc_pool`, the no longer used chunks are
returned to the pool. Also, the `bufq` asks the pool for a chunk when it needs
one. More on this in the section "pools".
## empty, full and overflow
One can ask about the state of a `bufq` with methods such as `Curl_bufq_is_empty(q)`,
`Curl_bufq_is_full(q)`, etc. The amount of data held by a `bufq` is the sum of the data in all its chunks. This is what is reported by `Curl_bufq_len(q)`.
One can ask about the state of a `bufq` with methods such as
`Curl_bufq_is_empty(q)`, `Curl_bufq_is_full(q)`, etc. The amount of data held
by a `bufq` is the sum of the data in all its chunks. This is what is reported
by `Curl_bufq_len(q)`.
Note that a `bufq` length and it being "full" are only loosely related. A simple example:
Note that a `bufq` length and it being "full" are only loosely related. A
simple example:
* create a `bufq` with chunk_size=1000 and max_chunks=4.
* write 4000 bytes to it, it will report "full"
* read 1 bytes from it, it will still report "full"
* read 999 more bytes from it, and it will no longer be "full"
* write 4000 bytes to it, it reports "full"
* read 1 byte from it, it still reports "full"
* read 999 more bytes from it, and it is no longer "full"
The reason for this is that full really means: *bufq uses max_chunks and the
last one cannot be written to*.
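The example above as a code sketch, using the signatures shown earlier in
this document (the data written is irrelevant here):

```
struct bufq q;
unsigned char chunk[1000] = {0};
unsigned char tiny;
CURLcode err;
int i;

Curl_bufq_init(&q, 1000, 4);                /* chunk_size=1000, max_chunks=4 */
for(i = 0; i < 4; i++)                      /* write 4000 bytes in total */
  (void)Curl_bufq_write(&q, chunk, 1000, &err);
/* Curl_bufq_is_full(&q) -> TRUE, all 4 chunks in use */
(void)Curl_bufq_read(&q, &tiny, 1, &err);   /* 1 byte read: still "full" */
(void)Curl_bufq_read(&q, chunk, 999, &err); /* head chunk drained, removed */
/* Curl_bufq_is_full(&q) -> FALSE, a new tail chunk may be added */
Curl_bufq_free(&q);
```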
@ -123,16 +140,16 @@ hold 999 unread bytes. Only when those are also read, can the head chunk be
removed and a new tail be added.
There is another variation to this. If you initialized a `bufq` with option
`BUFQ_OPT_SOFT_LIMIT`, it will allow writes **beyond** the `max_chunks`. It
will report **full**, but one can **still** write. This option is necessary,
if partial writes need to be avoided. It means that you will need other checks
to keep the `bufq` from growing ever larger and larger.
`BUFQ_OPT_SOFT_LIMIT`, it allows writes **beyond** the `max_chunks`. It
reports **full**, but one can **still** write. This option is necessary if
partial writes need to be avoided. It means that you need other checks to keep
the `bufq` from growing ever larger.
## pools
A `struct bufc_pool` may be used to create chunks for a `bufq` and keep spare ones around. It is initialized
and used via:
A `struct bufc_pool` may be used to create chunks for a `bufq` and keep spare
ones around. It is initialized and used via:
```
void Curl_bufcp_init(struct bufc_pool *pool, size_t chunk_size, size_t spare_max);
@ -140,9 +157,15 @@ void Curl_bufcp_init(struct bufc_pool *pool, size_t chunk_size, size_t spare_max
void Curl_bufq_initp(struct bufq *q, struct bufc_pool *pool, size_t max_chunks, int opts);
```
The pool gets the size and the mount of spares to keep. The `bufq` gets the pool and the `max_chunks`. It no longer needs to know the chunk sizes, as those are managed by the pool.
The pool gets the size and the amount of spares to keep. The `bufq` gets the
pool and the `max_chunks`. It no longer needs to know the chunk sizes, as
those are managed by the pool.
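A sketch of two `bufq`s sharing one pool, using the signatures above; passing
`0` for `opts` is assumed to mean "no options" (named flags such as
`BUFQ_OPT_NO_SPARES` exist per the text earlier):

```
struct bufc_pool pool;
struct bufq q1, q2;

Curl_bufcp_init(&pool, 16 * 1024, 8);  /* 16KB chunks, keep up to 8 spares */
Curl_bufq_initp(&q1, &pool, 4, 0);     /* 0: no option flags (assumed) */
Curl_bufq_initp(&q2, &pool, 4, 0);
/* q1 and q2 now draw chunks from, and return them to, the shared pool */
```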
A pool can be shared between many `bufq`s, as long as all of them operate in the same thread. In curl that would be true for all transfers using the same multi handle. The advantages of a pool are:
A pool can be shared between many `bufq`s, as long as all of them operate in
the same thread. In curl that would be true for all transfers using the same
multi handle. The advantages of a pool are:
* when all `bufq`s are empty, only memory for `max_spare` chunks in the pool is used. Empty `bufq`s will hold no memory.
* the latest spare chunk is the first to be handed out again, no matter which `bufq` needs it. This keeps the footprint of "recently used" memory smaller.
* when all `bufq`s are empty, only memory for `max_spare` chunks in the pool
is used. Empty `bufq`s hold no memory.
* the latest spare chunk is the first to be handed out again, no matter which
`bufq` needs it. This keeps the footprint of "recently used" memory smaller.

View File

@ -44,8 +44,7 @@ void Curl_bufref_set(struct bufref *br, const void *buffer, size_t length,
Releases the previously referenced buffer, then assigns the new `buffer` to
the structure, associated with its `destructor` function. The latter can be
specified as `NULL`: this will be the case when the referenced buffer is
static.
specified as `NULL`: this is the case when the referenced buffer is static.
If `buffer` is NULL, `length` must be zero.
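A short sketch of both cases; `Curl_bufref_init()`, `Curl_bufref_free()` and
the heap-allocated `owned`/`olen` pair are assumptions for illustration:

```
struct bufref br;

Curl_bufref_init(&br);                          /* assumed initializer */
Curl_bufref_set(&br, "static data", 11, NULL);  /* static: NULL destructor */
Curl_bufref_set(&br, owned, olen, free);        /* releases the previous
                                                   buffer; frees on release */
Curl_bufref_free(&br);                          /* assumed cleanup call */
```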

View File

@ -21,8 +21,8 @@ security vulnerabilities. The amount of money that is rewarded depends on how
serious the flaw is determined to be.
Since 2021, the Bug Bounty is managed in association with the Internet Bug
Bounty and they will set the reward amounts. If it would turn out that they
set amounts that are way lower than we can accept, the curl project intends to
Bounty and they set the reward amounts. If it turns out that they set
amounts that are way lower than we can accept, the curl project intends to
"top up" rewards.
In 2022, typical "Medium" rated vulnerabilities have been rewarded 2,400 USD
@ -40,7 +40,7 @@ Vulnerabilities in features that are off by default and documented as
experimental are not eligible for a reward.
The vulnerability has to be fixed and publicly announced (by the curl project)
before a bug bounty will be considered.
before a bug bounty is considered.
Once the vulnerability has been published by curl, the researcher can request
their bounty from the [Internet Bug Bounty](https://hackerone.com/ibb).
@ -63,9 +63,9 @@ bounty or not.
## How are vulnerabilities graded?
The grading of each reported vulnerability that makes a reward claim will be
performed by the curl security team. The grading will be based on the CVSS
(Common Vulnerability Scoring System) 3.0.
The grading of each reported vulnerability that makes a reward claim is
performed by the curl security team. The grading is based on the CVSS (Common
Vulnerability Scoring System) 3.0.
## How are reward amounts determined?

View File

@ -3,7 +3,7 @@
## There are still bugs
Curl and libcurl keep being developed. Adding features and changing code
means that bugs will sneak in, no matter how hard we try to keep them out.
means that bugs sneak in, no matter how hard we try to keep them out.
Of course there are lots of bugs left. Not to mention misfeatures.
@ -34,16 +34,16 @@
HackerOne](https://hackerone.com/curl).
This ensures that the report reaches the curl security team so that they
first can deal with the report away from the public to minimize the harm
and impact it will have on existing users out there who might be using the
vulnerable versions.
first can deal with the report away from the public to minimize the harm and
impact it has on existing users out there who might be using the vulnerable
versions.
The curl project's process for handling security related issues is
[documented separately](https://curl.se/dev/secprocess.html).
## What to report
When reporting a bug, you should include all information that will help us
When reporting a bug, you should include all information to help us
understand what is wrong, what you expected to happen and how to repeat the
bad behavior. You therefore need to tell us:
@ -58,8 +58,8 @@
and anything and everything else you think matters. Tell us what you expected
to happen, tell use what did happen, tell us how you could make it work
another way. Dig around, try out, test. Then include all the tiny bits and
pieces in your report. You will benefit from this yourself, as it will enable
us to help you quicker and more accurately.
pieces in your report. You benefit from this yourself, as it enables us to
help you quicker and more accurately.
Since curl deals with networks, it often helps us if you include a protocol
debug dump with your bug report. The output you get by using the `-v` or
@ -84,15 +84,15 @@
SCP, the libssh2 version is relevant etc.
Showing us a real source code example repeating your problem is the best way
to get our attention and it will greatly increase our chances to understand
your problem and to work on a fix (if we agree it truly is a problem).
to get our attention and it greatly increases our chances to understand your
problem and to work on a fix (if we agree it truly is a problem).
Lots of problems that appear to be libcurl problems are actually just abuses
of the libcurl API or other malfunctions in your applications. It is advised
that you run your problematic program using a memory debug tool like valgrind
or similar before you post memory-related or "crashing" problems to us.
## Who will fix the problems
## Who fixes the problems
If the problems or bugs you describe are considered to be bugs, we want to
have the problems fixed.
@ -102,11 +102,11 @@
it out of an ambition to keep curl and libcurl excellent products and out of
pride.
Please do not assume that you can just lump over something to us and it will
then magically be fixed after some given time. Most often we need feedback
and help to understand what you have experienced and how to repeat a
problem. Then we may only be able to assist YOU to debug the problem and to
track down the proper fix.
Please do not assume that you can just lump something over to us and it then
magically gets fixed after some given time. Most often we need feedback and
help to understand what you have experienced and how to repeat a problem.
Then we may only be able to assist YOU to debug the problem and to track down
the proper fix.
We get reports from many people every month and each report can take a
considerable amount of time to really get to the bottom of.
@ -119,23 +119,23 @@
Run the program until it cores.
Run your debugger on the core file, like `<debugger> curl
core`. `<debugger>` should be replaced with the name of your debugger, in
most cases that will be `gdb`, but `dbx` and others also occur.
Run your debugger on the core file, like `<debugger> curl core`. `<debugger>`
should be replaced with the name of your debugger, in most cases that is
`gdb`, but `dbx` and others also occur.
When the debugger has finished loading the core file and presents you a
prompt, enter `where` (without quotes) and press return.
The list that is presented is the stack trace. If everything worked, it is
supposed to contain the chain of functions that were called when curl
crashed. Include the stack trace with your detailed bug report, it will help a
crashed. Include the stack trace with your detailed bug report, it helps a
lot.
## Bugs in libcurl bindings
There will of course pop up bugs in libcurl bindings. You should then
primarily approach the team that works on that particular binding and see
what you can do to help them fix the problem.
There are of course bugs in libcurl bindings. You should then primarily
approach the team that works on that particular binding and see what you can
do to help them fix the problem.
If you suspect that the problem exists in the underlying libcurl, then please
convert your program over to plain C and follow the steps outlined above.
@ -181,13 +181,13 @@
maybe they are off in the woods hunting. Have patience. Allow at least a few
days before expecting someone to have responded.
In the issue tracker, you can expect that some labels will be set on the issue
to help categorize it.
In the issue tracker, you can expect that some labels are set on the issue to
help categorize it.
## First response
If your issue/bug report was not perfect at once (and few are), chances are
that someone will ask follow-up questions. Which version did you use? Which
that someone asks follow-up questions. Which version did you use? Which
options did you use? How often does the problem occur? How can we reproduce
this problem? Which protocols does it involve? Or perhaps much more specific
and deep diving questions. It all depends on your specific issue.
@ -210,8 +210,8 @@
for discussing possible ways to move forward with the task, we take that as a
strong suggestion that the bug is unimportant.
Unimportant issues will be closed as inactive sooner or later as they cannot
be fixed. The inactivity period (waiting for responses) should not be shorter
Unimportant issues are closed as inactive sooner or later as they cannot be
fixed. The inactivity period (waiting for responses) should not be shorter
than two weeks but may extend months.
## Lack of time/interest
@ -240,9 +240,8 @@
Issues that are filed or reported that are not really bugs but more missing
features or ideas for future improvements and so on are marked as
'enhancement' or 'feature-request' and will be added to the `TODO` document
and the issues are closed. We do not keep TODO items open in the issue
tracker.
*enhancement* or *feature-request* and get added to the `TODO` document and
the issues are closed. We do not keep TODO items open in the issue tracker.
The `TODO` document is full of ideas and suggestions of what we can add or
fix one day. You are always encouraged and free to grab one of those items and
@ -255,11 +254,11 @@
## Closing off stalled bugs
The [issue and pull request trackers](https://github.com/curl/curl) only
hold "active" entries open (using a non-precise definition of what active
actually is, but they are at least not completely dead). Those that are
abandoned or in other ways dormant will be closed and sometimes added to
`TODO` and `KNOWN_BUGS` instead.
The [issue and pull request trackers](https://github.com/curl/curl) only hold
"active" entries open (using a non-precise definition of what active actually
is, but they are at least not completely dead). Those that are abandoned or
in other ways dormant are closed and sometimes added to `TODO` and
`KNOWN_BUGS` instead.
This way, we only have "active" issues open on GitHub. Irrelevant issues and
pull requests will not distract developers or casual visitors.
pull requests do not distract developers or casual visitors.

View File

@ -73,7 +73,7 @@ warnings are:
- `FOPENMODE`: `fopen()` needs a macro for the mode string, use it
- `INDENTATION`: detected a wrong start column for code. Note that this
warning only checks some specific places and will certainly miss many bad
warning only checks some specific places and can certainly miss many bad
indentations.
- `LONGLINE`: A line is longer than 79 columns.
@ -158,21 +158,21 @@ Example
/* !checksrc! disable LONGLINE all */
This will ignore the warning for overly long lines until it is re-enabled with:
This ignores the warning for overly long lines until it is re-enabled with:
/* !checksrc! enable LONGLINE */
If the enabling is not performed before the end of the file, it will be enabled
automatically for the next file.
If the enabling is not performed before the end of the file, it is enabled
again automatically for the next file.
You can also opt to ignore just N violations, so that if you have a single
long line you just cannot shorten and that is agreed to be fine anyway:
/* !checksrc! disable LONGLINE 1 */
... and the warning for long lines will be enabled again automatically after
it has ignored that single warning. The number `1` can of course be changed to
any other integer number. It can be used to make sure only the exact intended
... and the warning for long lines is enabled again automatically after it has
ignored that single warning. The number `1` can of course be changed to any
other integer number. It can be used to make sure only the exact intended
instances are ignored and nothing extra.
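Put together, a tolerated single violation can look like this (the macro and
its string are made-up examples):

```
/* !checksrc! disable LONGLINE 1 */
#define USAGE "curl: a made-up, single-line usage string that is agreed to be fine despite its length"
```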
### Directory wide ignore patterns

View File

@ -290,9 +290,9 @@ next section.
There is also the case that the selected algorithm is not supported by the
protocol or does not match the ciphers offered by the server during the SSL
negotiation. In this case curl will return error
negotiation. In this case curl returns error
`CURLE_SSL_CONNECT_ERROR (35) SEC_E_ALGORITHM_MISMATCH`
and the request will fail.
and the request fails.
`CALG_MD2`,
`CALG_MD4`,
@ -353,7 +353,7 @@ are running an outdated OS you might still be supporting weak ciphers.
You can set TLS 1.3 ciphers for Schannel by using `CURLOPT_TLS13_CIPHERS` or
`--tls13-ciphers` with the names below.
If TLS 1.3 cipher suites are set then libcurl will add or restrict Schannel TLS
If TLS 1.3 cipher suites are set then libcurl adds or restricts Schannel TLS
1.3 algorithms automatically. Essentially, libcurl is emulating support for
individual TLS 1.3 cipher suites since Schannel does not support it directly.
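For instance, with the public libcurl option mentioned above (the suite name
here is one standard TLS 1.3 cipher suite, picked as an example):

```
CURL *h = curl_easy_init();
/* restrict Schannel to a single TLS 1.3 cipher suite */
curl_easy_setopt(h, CURLOPT_TLS13_CIPHERS, "TLS_AES_256_GCM_SHA384");
```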

View File

@ -82,13 +82,27 @@ With these writers always in place, libcurl's protocol handlers automatically ha
## Enhanced Use
HTTP is the protocol in curl that makes use of the client writer chain by adding writers to it. When the `libcurl` application set `CURLOPT_ACCEPT_ENCODING` (as `curl` does with `--compressed`), the server is offered an `Accept-Encoding` header with the algorithms supported. The server then may choose to send the response body compressed. For example using `gzip` or `brotli` or even both.
HTTP is the protocol in curl that makes use of the client writer chain by
adding writers to it. When the `libcurl` application sets
`CURLOPT_ACCEPT_ENCODING` (as `curl` does with `--compressed`), the server is
offered an `Accept-Encoding` header with the algorithms supported. The server
then may choose to send the response body compressed. For example, using
`gzip` or `brotli` or even both.
In the server's response, there then will be a `Content-Encoding` header listing the encoding applied. If supported by `libcurl` it will then decompress the content before writing it out to the client. How does it do that?
In the server's response, there may then be a `Content-Encoding` header
listing the encodings applied. If supported by `libcurl`, it then decompresses
the content before writing it out to the client. How does it do that?
The HTTP protocol will add client writers in phase `CURL_CW_CONTENT_DECODE` on seeing such a header. For each encoding listed, it will add the corresponding writer. The response from the server is then passed through `Curl_client_write()` to the writers that decode it. If several encodings had been applied the writer chain decodes them in the proper order.
The HTTP protocol adds client writers in phase `CURL_CW_CONTENT_DECODE` on
seeing such a header. For each encoding listed, it adds the corresponding
writer. The response from the server is then passed through
`Curl_client_write()` to the writers that decode it. If several encodings have
been applied, the writer chain decodes them in the proper order.
When the server provides a `Content-Length` header, that value applies to the *compressed* content. So length checks on the response bytes must happen *before* it gets decoded. That is why this check happens in phase `CURL_CW_PROTOCOL` which always is ordered before writers in phase `CURL_CW_CONTENT_DECODE`.
When the server provides a `Content-Length` header, that value applies to the
*compressed* content. Length checks on the response bytes must happen *before*
it gets decoded. That is why this check happens in phase `CURL_CW_PROTOCOL`
which always is ordered before writers in phase `CURL_CW_CONTENT_DECODE`.
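In application terms, all of the above is switched on with the one public
option already mentioned; the empty string asks for every encoding the
libcurl build supports:

```
CURL *h = curl_easy_init();
curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
/* offer all built-in encodings; response decoding then happens in the
   CURL_CW_CONTENT_DECODE writers described above */
curl_easy_setopt(h, CURLOPT_ACCEPT_ENCODING, "");
curl_easy_perform(h);
```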
What else?

View File

@ -19,7 +19,7 @@ particularly unusual rules in our set of rules.
We also work hard on writing code that is warning-free on all the major
platforms and in general on as many platforms as possible. Code that obviously
will cause warnings will not be accepted as-is.
causes warnings is not accepted as-is.
## Naming
@ -218,7 +218,7 @@ int size = sizeof(int);
Some statements cannot be completed on a single line because the line would be
too long, the statement too hard to read, or due to other style guidelines
above. In such a case the statement will span multiple lines.
above. In such a case the statement spans multiple lines.
If a continuation line is part of an expression or sub-expression then you
should align on the appropriate column so that it is easy to tell what part of

View File

@ -19,8 +19,8 @@ by a `socket` and a SSL instance en- and decrypt over that socket. You write
your request to the SSL instance, which encrypts and writes that data to the
socket, which then sends the bytes over the network.
With connection filters, curl's internal setup will look something like this
(cf for connection filter):
With connection filters, curl's internal setup looks something like this (cf
for connection filter):
```
Curl_easy *data connectdata *conn cf-ssl cf-socket
@ -33,9 +33,15 @@ Curl_easy *data connectdata *conn cf-ssl cf-socket
---> conn->filter->write(conn->filter, data, buffer)
```
While connection filters all do different things, they look the same from the "outside". The code in `data` and `conn` does not really know **which** filters are installed. `conn` just writes into the first filter, whatever that is.
While connection filters all do different things, they look the same from the
"outside". The code in `data` and `conn` does not really know **which**
filters are installed. `conn` just writes into the first filter, whatever that
is.
Same is true for filters. Each filter has a pointer to the `next` filter. When SSL has encrypted the data, it does not write to a socket, it writes to the next filter. If that is indeed a socket, or a file, or an HTTP/2 connection is of no concern to the SSL filter.
The same is true for filters. Each filter has a pointer to the `next` filter.
When SSL has encrypted the data, it does not write to a socket, it writes to
the next filter. Whether that is a socket, or a file, or an HTTP/2 connection
is of no concern to the SSL filter.
This allows stacking, as in:
@ -55,7 +61,12 @@ Via http proxy tunnel via SOCKS proxy:
### Connecting/Closing
Before `Curl_easy` can send the request, the connection needs to be established. This means that all connection filters have done, whatever they need to do: waiting for the socket to be connected, doing the TLS handshake, performing the HTTP tunnel request, etc. This has to be done in reverse order: the last filter has to do its connect first, then the one above can start, etc.
Before `Curl_easy` can send the request, the connection needs to be
established. This means that all connection filters have done whatever they
need to do: waiting for the socket to be connected, doing the TLS handshake,
performing the HTTP tunnel request, etc. This has to be done in reverse order:
the last filter has to do its connect first, then the one above can start,
etc.
Each filter does in principle the following:
@ -82,12 +93,14 @@ myfilter_cf_connect(struct Curl_cfilter *cf,
}
```
Closing a connection then works similar. The `conn` tells the first filter to close. Contrary to connecting,
the filter does its own things first, before telling the next filter to close.
Closing a connection then works similarly. The `conn` tells the first filter
to close. Contrary to connecting, the filter does its own things first, before
telling the next filter to close.
### Efficiency
There are two things curl is concerned about: efficient memory use and fast transfers.
There are two things curl is concerned about: efficient memory use and fast
transfers.
The memory footprint of a filter is relatively small:
@ -101,13 +114,24 @@ struct Curl_cfilter {
BIT(connected); /* != 0 iff this filter is connected */
};
```
The filter type `cft` is a singleton, one static struct for each type of filter. The `ctx` is where a filter will hold its specific data. That varies by filter type. An http-proxy filter will keep the ongoing state of the CONNECT here, but free it after its has been established. The SSL filter will keep the `SSL*` (if OpenSSL is used) here until the connection is closed. So, this varies.
`conn` is a reference to the connection this filter belongs to, so nothing extra besides the pointer itself.
The filter type `cft` is a singleton, one static struct for each type of
filter. The `ctx` is where a filter holds its specific data. That varies by
filter type. An http-proxy filter keeps the ongoing state of the CONNECT here,
and frees it after the tunnel has been established. The SSL filter keeps the
`SSL*` (if OpenSSL is used) here until the connection is closed. So, this
varies.
Several things, that before were kept in `struct connectdata`, will now go into the `filter->ctx` *when needed*. So, the memory footprint for connections that do *not* use an http proxy, or socks, or https will be lower.
`conn` is a reference to the connection this filter belongs to, so nothing
extra besides the pointer itself.
As to transfer efficiency, writing and reading through a filter comes at near zero cost *if the filter does not transform the data*. An http proxy or socks filter, once it is connected, will just pass the calls through. Those filters implementations will look like this:
Several things that before were kept in `struct connectdata` now go into
the `filter->ctx` *when needed*. So, the memory footprint for connections that
do *not* use an http proxy, or socks, or https is lower.
As to transfer efficiency, writing and reading through a filter comes at near
zero cost *if the filter does not transform the data*. An http proxy or socks
filter, once it is connected, just passes the calls through. Those filter
implementations look like this:
```
ssize_t Curl_cf_def_send(struct Curl_cfilter *cf, struct Curl_easy *data,
@ -120,37 +144,58 @@ The `recv` implementation is equivalent.
## Filter Types
The currently existing filter types (curl 8.5.0) are:
* `TCP`, `UDP`, `UNIX`: filters that operate on a socket, providing raw I/O.
* `SOCKET-ACCEPT`: special TCP socket that has a socket that has been `accept()`ed in a `listen()`
* `SSL`: filter that applies TLS en-/decryption and handshake. Manages the underlying TLS backend implementation.
* `SOCKET-ACCEPT`: special TCP filter for a socket that has been
`accept()`ed in a `listen()`
* `SSL`: filter that applies TLS en-/decryption and handshake. Manages the
underlying TLS backend implementation.
* `HTTP-PROXY`, `H1-PROXY`, `H2-PROXY`: the first manages the connection to an
HTTP proxy server and uses the other depending on which ALPN protocol has
been negotiated.
* `SOCKS-PROXY`: filter for the various SOCKS proxy protocol variations
* `HAPROXY`: filter for the protocol of the same name, providing client IP information to a server.
* `HTTP/2`: filter for handling multiplexed transfers over an HTTP/2 connection
* `HTTP/3`: filter for handling multiplexed transfers over an HTTP/3+QUIC connection
* `HAPPY-EYEBALLS`: meta filter that implements IPv4/IPv6 "happy eyeballing". It creates up to 2 sub-filters that race each other for a connection.
* `SETUP`: meta filter that manages the creation of sub-filter chains for a specific transport (e.g. TCP or QUIC).
* `HTTPS-CONNECT`: meta filter that races a TCP+TLS and a QUIC connection against each other to determine if HTTP/1.1, HTTP/2 or HTTP/3 shall be used for a transfer.
* `HAPROXY`: filter for the protocol of the same name, providing client IP
information to a server.
* `HTTP/2`: filter for handling multiplexed transfers over an HTTP/2
connection
* `HTTP/3`: filter for handling multiplexed transfers over an HTTP/3+QUIC
connection
* `HAPPY-EYEBALLS`: meta filter that implements IPv4/IPv6 "happy eyeballing".
It creates up to 2 sub-filters that race each other for a connection.
* `SETUP`: meta filter that manages the creation of sub-filter chains for a
specific transport (e.g. TCP or QUIC).
* `HTTPS-CONNECT`: meta filter that races a TCP+TLS and a QUIC connection
against each other to determine if HTTP/1.1, HTTP/2 or HTTP/3 shall be used
for a transfer.
Meta filters are combining other filters for a specific purpose, mostly during connection establishment. Other filters like `TCP`, `UDP` and `UNIX` are only to be found at the end of filter chains. SSL filters provide encryption, of course. Protocol filters change the bytes sent and received.
Meta filters combine other filters for a specific purpose, mostly during
connection establishment. Other filters like `TCP`, `UDP` and `UNIX` are only
to be found at the end of filter chains. SSL filters provide encryption, of
course. Protocol filters change the bytes sent and received.
## Filter Flags
Filter types carry flags that inform what they do. These are (for now):
* `CF_TYPE_IP_CONNECT`: this filter type talks directly to a server. This does not have to be the server the transfer wants to talk to. For example when a proxy server is used.
* `CF_TYPE_IP_CONNECT`: this filter type talks directly to a server. This does
not have to be the server the transfer wants to talk to. For example when a
proxy server is used.
* `CF_TYPE_SSL`: this filter type provides encryption.
* `CF_TYPE_MULTIPLEX`: this filter type can manage multiple transfers in parallel.
Filter types can combine these flags. For example, the HTTP/3 filter types have `CF_TYPE_IP_CONNECT`, `CF_TYPE_SSL` and `CF_TYPE_MULTIPLEX` set.
Filter types can combine these flags. For example, the HTTP/3 filter types
have `CF_TYPE_IP_CONNECT`, `CF_TYPE_SSL` and `CF_TYPE_MULTIPLEX` set.
Flags are useful to extrapolate properties of a connection. To check if a connection is encrypted, libcurl inspect the filter chain in place, top down, for `CF_TYPE_SSL`. If it finds `CF_TYPE_IP_CONNECT` before any `CF_TYPE_SSL`, the connection is not encrypted.
Flags are useful to extrapolate properties of a connection. To check if a
connection is encrypted, libcurl inspects the filter chain in place, top down,
for `CF_TYPE_SSL`. If it finds `CF_TYPE_IP_CONNECT` before any `CF_TYPE_SSL`,
the connection is not encrypted.
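As a sketch (the member names `cf->next` and `cf->cft->flags` are assumptions
based on the structs quoted earlier, not verbatim curl code):

```
/* top-down check as described above; member names are assumed */
static bool conn_is_encrypted(struct Curl_cfilter *cf)
{
  for(; cf; cf = cf->next) {
    if(cf->cft->flags & CF_TYPE_SSL)
      return TRUE;
    if(cf->cft->flags & CF_TYPE_IP_CONNECT)
      return FALSE;  /* reached the transport before any SSL */
  }
  return FALSE;
}
```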
For example, `conn1` is for a `http:` request using a tunnel through a HTTP/2 `https:` proxy. `conn2` is a `https:` HTTP/2 connection to the same proxy. `conn3` uses HTTP/3 without proxy. The filter chains would look like this (simplified):
For example, `conn1` is for a `http:` request using a tunnel through an HTTP/2
`https:` proxy. `conn2` is a `https:` HTTP/2 connection to the same proxy.
`conn3` uses HTTP/3 without proxy. The filter chains would look like this
(simplified):
```
conn1 --> `HTTP-PROXY` --> `H2-PROXY` --> `SSL` --> `TCP`
@ -163,13 +208,19 @@ conn3 --> `HTTP/3`
flags: `SSL|IP_CONNECT`
```
Inspecting the filter chains, `conn1` is seen as unencrypted, since it contains an `IP_CONNECT` filter before any `SSL`. `conn2` is clearly encrypted as an `SSL` flagged filter is seen first. `conn3` is also encrypted as the `SSL` flag is checked before the presence of `IP_CONNECT`.
Inspecting the filter chains, `conn1` is seen as unencrypted, since it
contains an `IP_CONNECT` filter before any `SSL`. `conn2` is clearly encrypted
as an `SSL` flagged filter is seen first. `conn3` is also encrypted as the
`SSL` flag is checked before the presence of `IP_CONNECT`.
Similar checks can determine if a connection is multiplexed or not.
## Filter Tracing
Filters may make use of special trace macros like `CURL_TRC_CF(data, cf, msg, ...)`. With `data` being the transfer and `cf` being the filter instance. These traces are normally not active and their execution is guarded so that they are cheap to ignore.
Filters may make use of special trace macros like `CURL_TRC_CF(data, cf, msg,
...)`, with `data` being the transfer and `cf` being the filter instance.
These traces are normally not active and their execution is guarded so that
they are cheap to ignore.
Users of `curl` may activate them by adding the name of the filter type to the
`--trace-config` argument. For example, in order to get more detailed tracing
@ -178,11 +229,19 @@ of an HTTP/2 request, invoke curl with:
```
> curl -v --trace-config ids,time,http/2 https://curl.se
```
Which will give you trace output with time information, transfer+connection ids and details from the `HTTP/2` filter. Filter type names in the trace config are case insensitive. You may use `all` to enable tracing for all filter types. When using `libcurl` you may call `curl_global_trace(config_string)` at the start of your application to enable filter details.
Which gives you trace output with time information, transfer+connection ids
and details from the `HTTP/2` filter. Filter type names in the trace config
are case insensitive. You may use `all` to enable tracing for all filter
types. When using `libcurl` you may call `curl_global_trace(config_string)` at
the start of your application to enable filter details.
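For a libcurl application, the equivalent of the command line above is a
single early call to the public API:

```
/* enable detailed filter tracing; call once, before transfers start */
curl_global_trace("ids,time,http/2");
```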
## Meta Filters
Meta filters is a catch-all name for filter types that do not change the transfer data in any way but provide other important services to curl. In general, it is possible to do all sorts of silly things with them. One of the commonly used, important things is "eyeballing".
"Meta filters" is a catch-all name for filter types that do not change the
transfer data in any way but provide other important services to curl. In
general, it is possible to do all sorts of silly things with them. One of the
commonly used, important things is "eyeballing".
The `HAPPY-EYEBALLS` filter is involved in the connect phase. Its job is to
try the various IPv4 and IPv6 addresses that are known for a server. If only
@ -190,7 +249,9 @@ one address family is known (or configured), it tries the addresses one after
the other with timeouts calculated from the amount of addresses and the
overall connect timeout.
When more than one address family is to be tried, it splits the address list into IPv4 and IPv6 and makes parallel attempts. The connection filter chain will look like this:
When more than one address family is to be tried, it splits the address list
into IPv4 and IPv6 and makes parallel attempts. The connection filter chain
looks like this:
```
* create connection for http://curl.se

View File

@ -35,14 +35,14 @@ must use "GPL compatible" licenses (as we want to allow users to use libcurl
properly in GPL licensed environments).
When changing existing source code, you do not alter the copyright of the
original file(s). The copyright will still be owned by the original creator(s)
or those who have been assigned copyright by the original author(s).
original file(s). The copyright is still owned by the original creator(s) or
those who have been assigned copyright by the original author(s).
By submitting a patch to the curl project, you are assumed to have the right
to the code and to be allowed by your employer or whatever to hand over that
patch/code to us. We will credit you for your changes as far as possible, to
give credit but also to keep a trace back to who made what changes. Please
always provide us with your full real name when contributing,
patch/code to us. We credit you for your changes as far as possible, to give
credit but also to keep a trace back to who made what changes. Please always
provide us with your full real name when contributing.
## What To Read
@ -50,10 +50,10 @@ Source code, the man pages, the [INTERNALS
document](https://curl.se/dev/internals.html),
[TODO](https://curl.se/docs/todo.html),
[KNOWN_BUGS](https://curl.se/docs/knownbugs.html) and the [most recent
changes](https://curl.se/dev/sourceactivity.html) in git. Just lurking on
the [curl-library mailing
list](https://curl.se/mail/list.cgi?list=curl-library) will give you a
lot of insights on what's going on right now. Asking there is a good idea too.
changes](https://curl.se/dev/sourceactivity.html) in git. Just lurking on the
[curl-library mailing list](https://curl.se/mail/list.cgi?list=curl-library)
gives you a lot of insights on what's going on right now. Asking there is a
good idea too.
## Write a good patch
@ -113,10 +113,10 @@ generated from the nroff/ASCII versions.
Since the introduction of the test suite, we can quickly verify that the main
features are working as they are supposed to. To maintain this situation and
improve it, all new features and functions that are added need to be tested
in the test suite. Every feature that is added should get at least one valid
test case that verifies that it works as documented. If every submitter also
posts a few test cases, it will not end up as a heavy burden on a single person.
improve it, all new features and functions that are added need to be tested in
the test suite. Every feature that is added should get at least one valid test
case that verifies that it works as documented. If every submitter also posts
a few test cases, it does not end up as a heavy burden on a single person.
If you do not have test cases or perhaps you have done something that is hard
to write tests for, do explain exactly how you have otherwise tested and
@ -131,19 +131,19 @@ GitHub](https://github.com/curl/curl/pulls), but you can also send your plain
patch to [the curl-library mailing
list](https://curl.se/mail/list.cgi?list=curl-library).
If you opt to post a patch on the mailing list, chances are someone will
convert it into a pull request for you, to have the CI jobs verify it proper
before it can be merged. Be prepared that some feedback on the proposed change
might then come on GitHub.
If you opt to post a patch on the mailing list, chances are someone converts
it into a pull request for you, to have the CI jobs verify it properly before
it can be merged. Be prepared that some feedback on the proposed change might
then come on GitHub.
Your change will be reviewed and discussed and you will be expected to correct
flaws pointed out and update accordingly, or the change risks stalling and
Your changes are reviewed and discussed and you are expected to correct flaws
pointed out and update accordingly, or the change risks stalling and
eventually just getting deleted without action. As a submitter of a change,
you are the owner of that change until it has been merged.
Respond on the list or on GitHub about the change and answer questions and/or
fix nits/flaws. This is important. We will take lack of replies as a sign that
you are not anxious to get your patch accepted and we tend to simply drop such
fix nits/flaws. This is important. We take lack of replies as a sign that you
are not anxious to get your patch accepted and we tend to simply drop such
changes.
### About pull requests
@ -157,7 +157,7 @@ git commit that is easy to merge and they are easy to track and not that easy
to lose in the flood of many emails, like they sometimes do on the mailing
lists.
Every pull request submitted will automatically be tested in several different
Every pull request submitted is automatically tested in several different
ways. [See the CI document for more
information](https://github.com/curl/curl/blob/master/tests/CI.md).
@ -219,10 +219,10 @@ A short guide to how to write git commit messages in the curl project.
has already been closed]
[Ref: URL to more information about the commit; use Bug: instead for
a reference to a bug on another bug tracker]
[Fixes #1234 - if this closes a GitHub issue; GitHub will actually
close the issue once this commit is merged]
[Closes #1234 - if this closes a GitHub PR; GitHub will actually
close the PR once this commit is merged]
[Fixes #1234 - if this closes a GitHub issue; GitHub closes the issue once
this commit is merged]
[Closes #1234 - if this closes a GitHub PR; GitHub closes the PR once this
commit is merged]
---- stop ----
The first line is a succinct description of the change:
@ -248,10 +248,10 @@ a previous commit; saying `{userid} on github` is OK.
### Write Access to git Repository
If you are a frequent contributor, you may be given push access to the git
repository and then you will be able to push your changes straight into the git
repository and then you are able to push your changes straight into the git
repo instead of sending changes as pull requests or by mail as patches.
Just ask if this is what you would want. You will be required to have posted
Just ask if this is what you would want. You are required to have posted
several high quality patches first, before you can be granted push access.
### How To Make a Patch with git
@ -302,9 +302,9 @@ all kinds of Unixes and Windows.
## Update copyright and license information
There is a CI job called **REUSE compliance / check** that will run on every
pull request and commit to verify that the *REUSE state* of all files are
still fine.
There is a CI job called **REUSE compliance / check** that runs on every pull
request and commit to verify that the *REUSE state* of all files is still
fine.
This means that all files need to have their license and copyright information
clearly stated. Ideally by having the standard curl source code header, with

View File

@ -106,12 +106,12 @@ Write italics like:
This is *italics*.
Due to how man pages do not support backticks especially formatted, such
occurrences in the source will instead just use italics in the generated
occurrences in the source instead just use italics in the generated
output:
This `word` appears in italics.
When generating the nroff output, the tooling will remove superfluous newlines,
When generating the nroff output, the tooling removes superfluous newlines,
meaning they can be used freely in the source file to make the text more
readable.
@ -121,10 +121,10 @@ occurrences of `<` or `>` need to be escaped by a leading backslash.
## symbols
All mentioned curl symbols that have their own man pages, like
`curl_easy_perform(3)` will automatically be rendered using italics in the
output without having to enclose it with asterisks. This helps ensuring that
they get converted to links properly later in the HTML version on the website,
as converted with `roffit`. This makes the curldown text easier to read even
when mentioning many curl symbols.
`curl_easy_perform(3)` are automatically rendered using italics in the output
without having to enclose them with asterisks. This helps ensure that they get
converted to links properly later in the HTML version on the website, as
converted with `roffit`. This makes the curldown text easier to read even when
mentioning many curl symbols.
This auto-linking works for patterns matching `(lib|)curl[^ ]*(3)`.

View File

@ -19,7 +19,7 @@ Due to a mistake, the `NTLM_WB` functionality is missing in builds since 8.4.0
(October 2023). It needs to be manually patched to work. See [PR
12479](https://github.com/curl/curl/pull/12479).
curl will remove the support for NTLM_WB auth in April 2024.
curl removes the support for NTLM_WB auth in April 2024.
## space-separated `NOPROXY` patterns
@ -38,7 +38,7 @@ variable but do not consider a space to be a valid separator. Using spaces for
separator is probably less portable and might cause more friction than commas
do. Users should use commas for this for greater portability.
curl will remove the support for space-separated names in July 2024.
curl removes the support for space-separated names in July 2024.
## past removals

View File

@ -3,7 +3,7 @@
This is the internal module for creating and handling "dynamic buffers". This
means buffers that can be appended to, dynamically and grow to adapt.
There will always be a terminating zero put at the end of the dynamic buffer.
There is always a terminating zero put at the end of the dynamic buffer.
The `struct dynbuf` is used to hold data for each instance of a dynamic
buffer. The members of that struct **MUST NOT** be accessed or modified
@ -17,8 +17,8 @@ void Curl_dyn_init(struct dynbuf *s, size_t toobig);
This initializes a struct to use for dynbuf and it cannot fail. The `toobig`
value **must** be set to the maximum size we allow this buffer instance to
grow to. The functions below will return `CURLE_OUT_OF_MEMORY` when hitting
this limit.
grow to. The functions below return `CURLE_OUT_OF_MEMORY` when hitting this
limit.
## `Curl_dyn_free`

View File

@ -28,7 +28,7 @@ big and we never release just a patch. There is only "release".
- Is there a security advisory rated high or critical?
- Is there a data corruption bug?
- Did the bug cause an API/ABI breakage?
- Will the problem annoy a significant share of the user population?
- Does the problem annoy a significant share of the user population?
If the answer is yes to one or more of the above, an early release might be
warranted.

View File

@ -8,8 +8,8 @@ Experimental support in curl means:
1. Experimental features are provided to allow users to try them out and
provide feedback on functionality and API etc before they ship and get
"carved in stone".
2. You must enable the feature when invoking configure as otherwise curl will
not be built with the feature present.
2. You must enable the feature when invoking configure as otherwise curl is
not built with the feature present.
3. We strongly advise against using this feature in production.
4. **We reserve the right to change behavior** of the feature without sticking
to our API/ABI rules as we do for regular features, as long as it is marked

View File

@ -10,9 +10,8 @@ BDFL (Benevolent Dictator For Life) style project.
This setup has been used due to convenience and the fact that it has worked
fine this far. It is not because someone thinks of it as a superior project
leadership model. It will also only continue working as long as Daniel manages
to listen in to what the project and the general user population wants and
expects from us.
leadership model. It also only works as long as Daniel manages to listen to
what the project and the general user population wants and expects from us.
## Legal entity
@ -29,13 +28,13 @@ that wrote those parts of the code.
The curl project is not a democracy, but everyone is entitled to state their
opinion and may argue for their sake within the community.
All and any changes that have been done or will be done are eligible to bring
up for discussion, to object to or to praise. Ideally, we find consensus for
the appropriate way forward in any given situation or challenge.
Any and all changes that have been done, or are being done, are eligible to
be brought up for discussion, to object to or to praise. Ideally, we find
consensus for the appropriate way forward in any given situation or challenge.
If there is no obvious consensus, a maintainer who's knowledgeable in the
specific area will take an "executive" decision that they think is the right
for the project.
specific area takes an "executive" decision that they think is right for
the project.
## Donations
@ -81,17 +80,17 @@ curl source code repository. Committers are recorded as `Author` in git.
A maintainer in the curl project is an individual who has been given
permissions to push commits to one of the git repositories.
Maintainers are free to push commits to the repositories at their own will.
Maintainers are free to push commits to the repositories as they see fit.
Maintainers are however expected to listen to feedback from users and any
change that is non-trivial in size or nature *should* be brought to the
project as a Pull-Request (PR) to allow others to comment/object before merge.
## Former maintainers
A maintainer who stops being active in the project will at some point get
their push permissions removed. We do this for security reasons but also to
make sure that we always have the list of maintainers as "the team that push
stuff to curl".
A maintainer who stops being active in the project gets their push permissions
removed at some point. We do this for security reasons but also to make sure
that we always have the list of maintainers as "the team that push stuff to
curl".
Getting push permissions removed is not a punishment. Everyone who ever worked
on maintaining curl is considered a hero, for all time hereafter.
@ -100,7 +99,7 @@ on maintaining curl is considered a hero, for all time hereafter.
We have a security team. That is the team of people who are subscribed to the
curl-security mailing list; the receivers of security reports from users and
developers. This list of people will vary over time but should be skilled
developers. This list of people varies over time but they are all skilled
developers familiar with the curl project.
The security team works best when it consists of a small set of active
@ -172,9 +171,8 @@ different individuals and over time.
If you think you can help making the project better by shouldering some
maintaining responsibilities, then please get in touch.
You will be expected to be familiar with the curl project and its ways of
working. You need to have gotten a few quality patches merged as a proof of
this.
You are expected to be familiar with the curl project and its ways of working.
You need to have gotten a few quality patches merged as a proof of this.
### Stop being a maintainer

View File

@ -40,8 +40,8 @@ In the issue tracker we occasionally mark bugs with [help
wanted](https://github.com/curl/curl/labels/help%20wanted), as a sign that the
bug is acknowledged to exist and that there is nobody known to work on this
issue for the moment. Those are bugs that are fine to "grab" and provide a
pull request for. The complexity level of these will of course vary, so pick
one that piques your interest.
pull request for. The complexity level of these of course varies, so pick one
that piques your interest.
## Work on known bugs
@ -77,13 +77,12 @@ brainstorming on specific ways to do the implementation etc.
You can also come up with a completely new thing you think we should do. Or
not do. Or fix. Or add to the project. You then either bring it to the mailing
list first to see if people will shoot down the idea at once, or you bring a
first draft of the idea as a pull request and take the discussion there around
the specific implementation. Either way is fine.
list first to see if people shoot down the idea at once, or you bring a first
draft of the idea as a pull request and take the discussion there around the
specific implementation. Either way is fine.
## CONTRIBUTE
We offer [guidelines](https://curl.se/dev/contribute.html) that are
suitable to be familiar with before you decide to contribute to curl. If
you are used to open source development, you will probably not find many
surprises there.
We offer [guidelines](https://curl.se/dev/contribute.html) that are suitable
to be familiar with before you decide to contribute to curl. If you are used
to open source development, you probably do not find many surprises there.

View File

@ -10,7 +10,7 @@ HTTP Strict-Transport-Security. Added as experimental in curl
## Behavior
libcurl features an in-memory cache for HSTS hosts, so that subsequent
HTTP-only requests to a hostname present in the cache will get internally
HTTP-only requests to a hostname present in the cache get internally
"redirected" to the HTTPS version.
## `curl_easy_setopt()` options:
@ -22,7 +22,7 @@ HTTP-only requests to a hostname present in the cache will get internally
## curl command line options
- `--hsts [filename]` - enable HSTS, use the file as HSTS cache. If filename
is `""` (no length) then no file will be used, only in-memory cache.
is `""` (no length) then no file is used, only in-memory cache.
## HSTS cache file format

View File

@ -9,7 +9,7 @@
Cookies are either "session cookies", which typically are forgotten when the
session is over (often translated to: when the browser quits), or they are
not session cookies: they have expiration dates after which
the client will throw them away.
the client throws them away.
Cookies are set to the client with the Set-Cookie: header and are sent to
servers with the Cookie: header.
@ -30,9 +30,9 @@
implemented by curl.
curl considers `http://localhost` to be a *secure context*, meaning that it
will allow and use cookies marked with the `secure` keyword even when done
over plain HTTP for this host. curl does this to match how popular browsers
work with secure cookies.
allows and uses cookies marked with the `secure` keyword even when done over
plain HTTP for this host. curl does this to match how popular browsers work
with secure cookies.
## Super cookies
@ -65,8 +65,7 @@
TAB. That file is called the cookie jar in curl terminology.
When libcurl saves a cookie jar, it creates a file header of its own in
which there is a URL mention that will link to the web version of this
document.
which there is a URL mention that links to the web version of this document.
## Cookie file format
@ -101,13 +100,13 @@
`-b, --cookie`
tell curl a file to read cookies from and start the cookie engine, or if it
is not a file it will pass on the given string. `-b name=var` works and so
does `-b cookiefile`.
is not a file it passes on the given string. `-b name=var` works and so does
`-b cookiefile`.
`-j, --junk-session-cookies`
when used in combination with -b, it will skip all "session cookies" on load
so as to appear to start a new cookie session.
when used in combination with -b, it skips all "session cookies" on load so
as to appear to start a new cookie session.
`-c, --cookie-jar`
@ -159,7 +158,7 @@
can also set and access cookies.
Since curl and libcurl are plain HTTP clients without any knowledge of or
capability to handle JavaScript, such cookies will not be detected or used.
capability to handle JavaScript, such cookies are not detected or used.
Often, if you want to mimic what a browser does on such websites, you can
record web browser HTTP traffic when using such a site and then repeat the
@ -23,20 +23,20 @@ We require at least version 1.12.0.
Over an http:// URL
-------------------
If `CURLOPT_HTTP_VERSION` is set to `CURL_HTTP_VERSION_2_0`, libcurl will
include an upgrade header in the initial request to the host to allow
upgrading to HTTP/2.
If `CURLOPT_HTTP_VERSION` is set to `CURL_HTTP_VERSION_2_0`, libcurl includes
an upgrade header in the initial request to the host to allow upgrading to
HTTP/2.
Possibly we can later introduce an option that will cause libcurl to fail if
Possibly we can later introduce an option that causes libcurl to fail if it is
not possible to upgrade. Possibly we introduce an option that makes libcurl
use HTTP/2 at once over http://
Over an https:// URL
--------------------
If `CURLOPT_HTTP_VERSION` is set to `CURL_HTTP_VERSION_2_0`, libcurl will use
ALPN to negotiate which protocol to continue with. Possibly introduce an
option that will cause libcurl to fail if not possible to use HTTP/2.
If `CURLOPT_HTTP_VERSION` is set to `CURL_HTTP_VERSION_2_0`, libcurl uses ALPN
to negotiate which protocol to continue with. Possibly introduce an option
that causes libcurl to fail if not possible to use HTTP/2.
`CURL_HTTP_VERSION_2TLS` was added in 7.47.0 as a way to ask libcurl to prefer
HTTP/2 for HTTPS but stick to 1.1 by default for plain old HTTP connections.
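As a rough command-line illustration (assuming the tool's `--http2` option,
which maps to `CURL_HTTP_VERSION_2_0`):

```
# over http:// curl sends an Upgrade: header in the initial request
curl --http2 http://example.com/

# over https:// the protocol is instead negotiated via ALPN
curl --http2 https://example.com/
```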
@ -54,15 +54,15 @@ term for doing multiple independent transfers over the same physical TCP
connection.
To take advantage of multiplexing, you need to use the multi interface and set
`CURLMOPT_PIPELINING` to `CURLPIPE_MULTIPLEX`. With that bit set, libcurl will
attempt to reuse existing HTTP/2 connections and just add a new stream over
`CURLMOPT_PIPELINING` to `CURLPIPE_MULTIPLEX`. With that bit set, libcurl
attempts to reuse existing HTTP/2 connections and just add a new stream over
that when doing subsequent parallel requests.
While libcurl sets up a connection to an HTTP server there is a period during
which it does not know if it can pipeline or do multiplexing and if you add
new transfers in that period, libcurl will default to start new connections
for those transfers. With the new option `CURLOPT_PIPEWAIT` (added in 7.43.0),
you can ask that a transfer should rather wait and see in case there is a
new transfers in that period, libcurl defaults to starting new connections for
those transfers. With the new option `CURLOPT_PIPEWAIT` (added in 7.43.0), you
can ask that a transfer should rather wait and see in case there is a
connection for the same host in progress that might end up being possible to
multiplex on. It favors keeping the number of connections low to the cost of
slightly longer time to first byte transferred.
@ -25,8 +25,8 @@ HTTP/3 support in curl is considered **EXPERIMENTAL** until further notice
when built to use *quiche* or *msh3*. Only the *ngtcp2* backend is not
experimental.
Further development and tweaking of the HTTP/3 support in curl will happen in
the master branch using pull-requests, just like ordinary changes.
Further development and tweaking of the HTTP/3 support in curl happens in the
master branch using pull-requests, just like ordinary changes.
To fix before we remove the experimental label:
@ -273,11 +273,10 @@ Build msh3:
% cmake --build . --config Release
% cmake --install . --config Release
**Note** - On Windows, Schannel will be used for TLS support by default. If
you wish to use (the quictls fork of) OpenSSL, specify the
`-DQUIC_TLS=openssl` option to the generate command above. Also note that
OpenSSL brings with it an additional set of build dependencies not specified
here.
**Note** - On Windows, Schannel is used for TLS support by default. If you
wish to use (the quictls fork of) OpenSSL, specify the `-DQUIC_TLS=openssl`
option to the generate command above. Also note that OpenSSL brings with it an
additional set of build dependencies not specified here.
Build curl (in [Visual Studio Command
prompt](../winbuild/README.md#open-a-command-prompt)):
@ -314,10 +313,10 @@ See this [list of public HTTP/3 servers](https://bagder.github.io/HTTP3-test/)
### HTTPS eyeballing
With option `--http3` curl will attempt earlier HTTP versions as well should
the connect attempt via HTTP/3 not succeed "fast enough". This strategy is
similar to IPv4/6 happy eyeballing where the alternate address family is used
in parallel after a short delay.
With option `--http3` curl attempts earlier HTTP versions as well should the
connect attempt via HTTP/3 not succeed "fast enough". This strategy is similar
to IPv4/6 happy eyeballing where the alternate address family is used in
parallel after a short delay.
The IPv4/6 eyeballing has a default of 200ms and you may override that via
`--happy-eyeballs-timeout-ms value`. Since HTTP/3 is still relatively new, we
@ -336,8 +335,8 @@ So, without you specifying anything, the hard timeout is 200ms and the soft is 1
in less than 100ms.
* When QUIC is not supported (or UDP does not work for this network path), no
reply is seen and the HTTP/2 TLS+TCP connection starts 100ms later.
* In the worst case, UDP replies start before 100ms, but drag on. This will
start the TLS+TCP connection after 200ms.
* In the worst case, UDP replies start before 100ms, but drag on. This starts
the TLS+TCP connection after 200ms.
* When the QUIC handshake fails, the TLS+TCP connection is attempted right
away. For example, when the QUIC server presents the wrong certificate.
@ -345,9 +344,9 @@ The whole transfer only fails, when **both** QUIC and TLS+TCP fail to
handshake or time out.
Note that all this happens in addition to IP version happy eyeballing. If the
name resolution for the server gives more than one IP address, curl will try
all those until one succeeds - just as with all other protocols. If those IP
addresses contain both IPv6 and IPv4, those attempts will happen, delayed, in
name resolution for the server gives more than one IP address, curl tries all
those until one succeeds - just as with all other protocols. If those IP
addresses contain both IPv6 and IPv4, those attempts happen, delayed, in
parallel (the actual eyeballing).
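For example (the timeout value is illustrative):

```
# try HTTP/3, attempting earlier HTTP versions if QUIC is not fast enough
curl --http3 https://example.com/

# widen the eyeballing windows by raising the base timeout
curl --http3 --happy-eyeballs-timeout-ms 300 https://example.com/
```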
## Known Bugs
@ -8,9 +8,8 @@ library as a backend to deal with HTTP.
Hyper support in curl is considered **EXPERIMENTAL** until further notice. It
needs to be explicitly enabled at build-time.
Further development and tweaking of the Hyper backend support in curl will
happen in the master branch using pull-requests, just like ordinary
changes.
Further development and tweaking of the Hyper backend support in curl happens
in the master branch using pull-requests, just like ordinary changes.
## Hyper version
@ -9,11 +9,11 @@
# Building with CMake
This document describes how to configure, build and install curl and libcurl
from source code using the CMake build tool. To build with CMake, you will
of course have to first install CMake. The minimum required version of CMake
is specified in the file `CMakeLists.txt` found in the top of the curl
source tree. Once the correct version of CMake is installed you can follow
the instructions below for the platform you are building on.
from source code using the CMake build tool. To build with CMake, you of
course first have to install CMake. The minimum required version of CMake is
specified in the file `CMakeLists.txt` found in the top of the curl source
tree. Once the correct version of CMake is installed you can follow the
instructions below for the platform you are building on.
CMake builds can be configured either from the command line, or from one of
CMake's GUIs.
@ -50,7 +50,7 @@ that is apart from the source tree.
$ cmake -B .
- Build in a separate directory (parallel to the curl source tree in this
example). The build directory will be created for you.
example). The build directory is created for you.
$ cmake -B ../curl-build
@ -74,9 +74,9 @@ CMake comes with a curses based interface called `ccmake`. To run `ccmake`
on a curl use the instructions for the command line cmake, but substitute
`ccmake` for `cmake`.
This will bring up a curses interface with instructions on the bottom of the
screen. You can press the "c" key to configure the project, and the "g" key
to generate the project. After the project is generated, you can run make.
This brings up a curses interface with instructions on the bottom of the
screen. You can press the "c" key to configure the project, and the "g" key to
generate the project. After the project is generated, you can run make.
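The same configuration can also be done non-interactively by passing `-D`
options on the command line (the `CURL_USE_OPENSSL` option name is given only
for illustration; see `CMakeLists.txt` for the available options):

```
$ cmake -B ../curl-build -DCURL_USE_OPENSSL=ON
$ cmake --build ../curl-build
```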
## Using `cmake-gui`
@ -110,10 +110,10 @@ Figuring out all the dependency libraries for a given library is hard, as it
might involve figuring out the dependencies of the dependencies and they vary
between platforms and change between versions.
When using static dependencies, the build scripts will mostly assume that you,
the user, will provide all the necessary additional dependency libraries as
additional arguments in the build. With configure, by setting `LIBS` or
`LDFLAGS` on the command line.
When using static dependencies, the build scripts mostly assume that you, the
user, provide all the necessary additional dependency libraries as additional
arguments in the build. With configure, by setting `LIBS` or `LDFLAGS` on the
command line.
Building statically is not for the faint of heart.
@ -123,8 +123,8 @@ If you are a curl developer and use gcc, you might want to enable more debug
options with the `--enable-debug` option.
curl can be built to use a whole range of libraries to provide various useful
services, and configure will try to auto-detect a decent default. If you want
to alter it, you can select how to deal with each individual library.
services, and configure tries to auto-detect a decent default. If you want to
alter it, you can select how to deal with each individual library.
## Select TLS backend
@ -189,7 +189,7 @@ multi-threaded dynamic C runtime.
Almost identical to the Unix installation. Run the configure script in the
curl source tree root with `sh configure`. Make sure you have the `sh`
executable in `/bin/` or you will see the configure fail toward the end.
executable in `/bin/` or you see the configure fail toward the end.
Run `make`
@ -267,16 +267,16 @@ might yet need some additional adjustment.
## Important static libcurl usage note
When building an application that uses the static libcurl library on Windows,
you must add `-DCURL_STATICLIB` to your `CFLAGS`. Otherwise the linker will
look for dynamic import symbols.
you must add `-DCURL_STATICLIB` to your `CFLAGS`. Otherwise the linker looks
for dynamic import symbols.
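A hedged sketch with MinGW-style tooling (paths and the dependency list are
illustrative and vary with how libcurl was built):

```
# the exact -l list depends on which backends your libcurl uses
gcc -DCURL_STATICLIB -I/opt/curl/include -c app.c
gcc app.o -L/opt/curl/lib -lcurl -lws2_32 -lcrypt32 -o app.exe
```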
## Legacy Windows and SSL
Schannel (from Windows SSPI), is the native SSL library in Windows. However,
Schannel in Windows <= XP is unable to connect to servers that
no longer support the legacy handshakes and algorithms used by those
versions. If you will be using curl in one of those earlier versions of
Windows you should choose another SSL backend such as OpenSSL.
Schannel in Windows <= XP is unable to connect to servers that no longer
support the legacy handshakes and algorithms used by those versions. If you
are using curl in one of those earlier versions of Windows you should choose
another SSL backend such as OpenSSL.
# Apple Platforms (macOS, iOS, tvOS, watchOS, and their simulator counterparts)
@ -285,10 +285,10 @@ implementation, Secure Transport, instead of OpenSSL. To build with Secure
Transport for SSL/TLS, use the configure option `--with-secure-transport`.
When Secure Transport is in use, the curl options `--cacert` and `--capath`
and their libcurl equivalents, will be ignored, because Secure Transport uses
the certificates stored in the Keychain to evaluate whether or not to trust
the server. This, of course, includes the root certificates that ship with the
OS. The `--cert` and `--engine` options, and their libcurl equivalents, are
and their libcurl equivalents, are ignored, because Secure Transport uses the
certificates stored in the Keychain to evaluate whether or not to trust the
server. This, of course, includes the root certificates that ship with the OS.
The `--cert` and `--engine` options, and their libcurl equivalents, are
currently unimplemented in curl with Secure Transport.
In general, a curl build for an Apple `ARCH/SDK/DEPLOYMENT_TARGET` combination
@ -307,7 +307,8 @@ make -j8
make install
```
Above will build curl for macOS platform with `x86_64` architecture and `10.8` as deployment target.
The above command lines build curl for the macOS platform with `x86_64`
architecture and `10.8` as deployment target.
Here is an example for iOS device:
@ -367,8 +368,8 @@ to adjust those variables accordingly. After that you can build curl like this:
./configure --host aarch64-linux-android --with-pic --disable-shared
Note that this will not give you SSL/TLS support. If you need SSL/TLS, you
have to build curl against a SSL/TLS layer, e.g. OpenSSL, because it is
Note that this does not give you SSL/TLS support. If you need SSL/TLS, you
have to build curl with a SSL/TLS library, e.g. OpenSSL, because it is
impossible for curl to access Android's native SSL/TLS layer. To build curl
for Android using OpenSSL, follow the OpenSSL build instructions and then
install `libssl.a` and `libcrypto.a` to `$TOOLCHAIN/sysroot/usr/lib` and copy
@ -376,7 +377,7 @@ install `libssl.a` and `libcrypto.a` to `$TOOLCHAIN/sysroot/usr/lib` and copy
for Android using OpenSSL like this:
```bash
LIBS="-lssl -lcrypto -lc++" # For OpenSSL/BoringSSL. In general, you will need to the SSL/TLS layer's transitive dependencies if you are linking statically.
LIBS="-lssl -lcrypto -lc++" # For OpenSSL/BoringSSL. In general, you need to the SSL/TLS layer's transitive dependencies if you are linking statically.
./configure --host aarch64-linux-android --with-pic --disable-shared --with-openssl="$TOOLCHAIN/sysroot/usr"
```
@ -386,22 +387,22 @@ For IBM i (formerly OS/400), you can use curl in two different ways:
- Natively, running in the **ILE**. The obvious use is being able to call curl
from ILE C or RPG applications.
- You will need to build this from source. See `packages/OS400/README` for
the ILE specific build instructions.
- In the **PASE** environment, which runs AIX programs. curl will be built as
it would be on AIX.
- IBM provides builds of curl in their Yum repository for PASE software.
- To build from source, follow the Unix instructions.
- You need to build this from source. See `packages/OS400/README` for the ILE
specific build instructions.
- In the **PASE** environment, which runs AIX programs. curl is built as it
would be on AIX.
- IBM provides builds of curl in their Yum repository for PASE software.
- To build from source, follow the Unix instructions.
There are some additional limitations and quirks with curl on this platform;
they affect both environments.
## Multi-threading notes
By default, jobs in IBM i will not start with threading enabled. (Exceptions
By default, jobs in IBM i do not start with threading enabled. (Exceptions
include interactive PASE sessions started by `QP2TERM` or SSH.) If you use
curl in an environment without threading when options like asynchronous DNS
were enabled, you will get messages like:
were enabled, you get messages like:
```
getaddrinfo() thread failed to start
@ -446,9 +447,9 @@ export NM=ppc_405-nm
You may also need to provide a parameter like `--with-random=/dev/urandom` to
configure as it cannot detect the presence of a random number generating
device for a target system. The `--prefix` parameter specifies where curl
will be installed. If `configure` completes successfully, do `make` and `make
install` as usual.
device for a target system. The `--prefix` parameter specifies where curl gets
installed. If `configure` completes successfully, do `make` and `make install`
as usual.
In some cases, you may be able to simplify the above commands to as little as:
@ -471,7 +472,7 @@ due to improved optimization.
Be sure to specify as many `--disable-` and `--without-` flags on the
configure command-line as you can to disable all the libcurl features that you
know your application is not going to need. Besides specifying the
`--disable-PROTOCOL` flags for all the types of URLs your application will not
`--disable-PROTOCOL` flags for all the types of URLs your application does not
use, here are some other flags that can reduce the size of the library by
disabling support for some feature:
@ -533,12 +534,12 @@ Using these techniques it is possible to create a basic HTTP-only libcurl
shared library for i386 Linux platforms that is only 133 KiB in size
(as of libcurl version 7.80.0, using gcc 11.2.0).
You may find that statically linking libcurl to your application will result
in a lower total size than dynamically linking.
You may find that statically linking libcurl to your application results in a
lower total size than dynamically linking.
Note that the curl test harness can detect the use of some, but not all, of
the `--disable` statements suggested above. Use will cause tests relying on
those features to fail. The test harness can be manually forced to skip the
The curl test harness can detect the use of some, but not all, of the
`--disable` statements suggested above. Use of these can cause tests relying
on those features to fail. The test harness can be manually forced to skip the
relevant tests by specifying certain key words on the `runtests.pl` command
line. Following is a list of appropriate key words for those configure options
that are not automatically detected:
@ -60,21 +60,22 @@ If the `IPFS_GATEWAY` environment variable is found, its value is used as
gateway.
### Automatic gateway detection
When you provide no additional details to cURL then cURL will:
1. First look for the `IPFS_GATEWAY` environment variable and use that if it
When you provide no additional details to cURL then it:
1. First looks for the `IPFS_GATEWAY` environment variable and uses that if it
is set.
2. Look for the file: `~/.ipfs/gateway`. If it can find that file then it
2. Looks for the file: `~/.ipfs/gateway`. If it can find that file then it
means that you have a local gateway running and that file contains the URL
to your local gateway.
If cURL fails you will be presented with an error message and a link to this
page to the option most applicable to solving the issue.
If cURL fails, you are presented with an error message and a link to this page
to the option most applicable to solving the issue.
### `--ipfs-gateway` argument
You can also provide a `--ipfs-gateway` argument to cURL. This overrules any
other gateway setting. curl will not fallback to the other options if the
other gateway setting. curl does not fall back to the other options if the
provided gateway did not work.
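For example (`<cid>` is a placeholder for a real content identifier):

```
# this gateway wins over IPFS_GATEWAY and ~/.ipfs/gateway, with no fallback
curl --ipfs-gateway https://dweb.link ipfs://<cid>
```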
## Gateway redirects
@ -91,11 +92,12 @@ Which would be translated to:
https://dweb.link/ipfs/bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi
Will redirect to:
redirects to:
https://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi.ipfs.dweb.link
If you trust this behavior from your gateway of choice then passing the `-L` option will follow the redirect.
If you trust this behavior from your gateway of choice then pass the `-L`
option to make curl follow the redirect.
## Error messages and hints
@ -68,8 +68,7 @@ Get a webpage and store in a local file with a specific name:
curl -o thatpage.html http://www.example.com/
Get a webpage and store in a local file, make the local file get the name of
the remote document (if no filename part is specified in the URL, this will
fail):
the remote document (if no filename part is specified in the URL, this fails):
curl -O http://www.example.com/index.html
@ -104,7 +103,7 @@ This is similar to FTP, but you can use the `--key` option to specify a
private key to use instead of a password. Note that the private key may itself
be protected by a password that is unrelated to the login password of the
remote system; this password is specified using the `--pass` option.
Typically, curl will automatically extract the public key from the private key
Typically, curl automatically extracts the public key from the private key
file, but in cases where curl does not have the proper library support, a
matching public key file must be specified using the `--pubkey` option.
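For example (paths, user and host are illustrative):

```
# key protected by its own passphrase; public key given explicitly for
# builds that cannot extract it automatically
curl --key ~/.ssh/id_rsa --pass keypassphrase --pubkey ~/.ssh/id_rsa.pub \
  -u user: sftp://example.com/tmp/file.txt
```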
@ -126,7 +125,7 @@ secure ones out of the ones that the server accepts for the given URL, by
using `--anyauth`.
**Note**! According to the URL specification, HTTP URLs can not contain a user
and password, so that style will not work when using curl via a proxy, even
and password, so that style does not work when using curl via a proxy, even
though curl allows it at other times. When using a proxy, you _must_ use the
`-u` style for user and password.
@ -161,7 +160,7 @@ specified as:
curl --noproxy example.com -x my-proxy:888 http://www.example.com/
If the proxy is specified with `--proxy1.0` instead of `--proxy` or `-x`, then
curl will use HTTP/1.0 instead of HTTP/1.1 for any `CONNECT` attempts.
curl uses HTTP/1.0 instead of HTTP/1.1 for any `CONNECT` attempts.
curl also supports SOCKS4 and SOCKS5 proxies with `--socks4` and `--socks5`.
@ -257,11 +256,11 @@ For other ways to do HTTP data upload, see the POST section below.
## Verbose / Debug
If curl fails where it is not supposed to, if the servers do not let you in, if
you cannot understand the responses: use the `-v` flag to get verbose
fetching. Curl will output lots of info and what it sends and receives in
order to let the user see all client-server interaction (but it will not show you
the actual data).
If curl fails where it is not supposed to, if the servers do not let you in,
if you cannot understand the responses: use the `-v` flag to get verbose
fetching. Curl outputs lots of info and what it sends and receives in order to
let the user see all client-server interaction (but it does not show you the
actual data).
curl -v ftp://ftp.example.com/
@ -283,7 +282,7 @@ extensive.
For HTTP, you can get the header information (the same as `-I` would show)
shown before the data by using `-i`/`--include`. Curl understands the
`-D`/`--dump-header` option when getting files from both FTP and HTTP, and it
will then store the headers in the specified file.
then stores the headers in the specified file.
Store the HTTP headers in a separate file (headers.txt in the example):
@ -354,9 +353,9 @@ different content types using the following syntax:
curl -F "coolfiles=@fil1.gif;type=image/gif,fil2.txt,fil3.html"
http://www.example.com/postit.cgi
If the content-type is not specified, curl will try to guess from the file
If the content-type is not specified, curl tries to guess from the file
extension (it only knows a few), or use the previously specified type (from an
earlier file if several files are specified in a list) or else it will use the
earlier file if several files are specified in a list) or else it uses the
default type `application/octet-stream`.
Emulate a fill-in form with `-F`. Let's say you fill in three fields in a
@ -475,11 +474,11 @@ non-existing file to trigger the cookie awareness like:
curl -L -b empty.txt www.example.com
The file to read cookies from must be formatted using plain HTTP headers OR as
Netscape's cookie file. Curl will determine what kind it is based on the file
contents. In the above command, curl will parse the header and store the
cookies received from www.example.com. curl will send to the server the stored
cookies which match the request as it follows the location. The file
`empty.txt` may be a nonexistent file.
Netscape's cookie file. Curl determines what kind it is based on the file
contents. In the above command, curl parses the header and stores the cookies
received from www.example.com. curl sends the stored cookies which match the
request to the server as it follows the location. The file `empty.txt` may be
a nonexistent file.
To read and write cookies from a Netscape cookie file, you can set both `-b`
and `-c` to use the same file:
@ -511,8 +510,8 @@ From left-to-right:
- `Curr.Speed` - the average transfer speed the last 5 seconds (the first
5 seconds of a transfer is based on less time of course.)
The `-#` option will display a totally different progress bar that does not
need much explanation!
The `-#` option displays a totally different progress bar that does not need
much explanation!
## Speed Limit
@ -549,9 +548,9 @@ Or prevent curl from uploading data faster than 1 megabyte per second:
curl -T upload --limit-rate 1M ftp://uploads.example.com
When using the `--limit-rate` option, the transfer rate is regulated on a
per-second basis, which will cause the total transfer speed to become lower
than the given number. Sometimes of course substantially lower, if your
transfer stalls during periods.
per-second basis, which causes the total transfer speed to become lower than
the given number. Sometimes of course substantially lower, if your transfer
stalls during periods.
## Config File
@ -592,9 +591,9 @@ URL by making a config file similar to:
url = "http://help.with.curl.example.com/curlhelp.html"
You can specify another config file to be read by using the `-K`/`--config`
flag. If you set config filename to `-` it will read the config from stdin,
which can be handy if you want to hide options from being visible in process
tables etc:
flag. If you set config filename to `-` it reads the config from stdin, which
can be handy if you want to hide options from being visible in process tables
etc:
echo "user = user:passwd" | curl -K - http://that.secret.example.com
@ -707,8 +706,8 @@ personal password:
curl -E /path/to/cert.pem:password https://secure.example.com/
If you neglect to specify the password on the command line, you will be
prompted for the correct password before any data can be received.
If you neglect to specify the password on the command line, you are prompted
for the correct password before any data can be received.
Many older HTTPS servers have problems with specific SSL or TLS versions,
which newer versions of OpenSSL etc use, therefore it is sometimes useful to
@ -716,7 +715,7 @@ specify what TLS version curl should use.:
curl --tlv1.0 https://secure.example.com/
Otherwise, curl will attempt to use a sensible TLS default version.
Otherwise, curl attempts to use a sensible TLS default version.
## Resuming File Transfers
@ -783,11 +782,11 @@ Authentication support is still missing
## LDAP
If you have installed the OpenLDAP library, curl can take advantage of it and
offer `ldap://` support. On Windows, curl will use WinLDAP from Platform SDK
by default.
offer `ldap://` support. On Windows, curl uses WinLDAP from Platform SDK by
default.
Default protocol version used by curl is LDAP version 3. Version 2 will be
used as a fallback mechanism in case version 3 fails to connect.
Default protocol version used by curl is LDAP version 3. Version 2 is used as
a fallback mechanism in case version 3 fails to connect.
LDAP is a complex thing and writing an LDAP query is not an easy
task. Familiarize yourself with the exact syntax description elsewhere. One
@ -804,14 +803,14 @@ You also can use authentication when accessing LDAP catalog:
curl -u user:passwd "ldap://ldap.example.com/o=frontec??sub?mail=*"
curl "ldap://user:passwd@ldap.example.com/o=frontec??sub?mail=*"
By default, if user and password are provided, OpenLDAP/WinLDAP will use basic
By default, if user and password are provided, OpenLDAP/WinLDAP uses basic
authentication. On Windows you can control this behavior by providing one of
`--basic`, `--ntlm` or `--digest` option in curl command line
curl --ntlm "ldap://user:passwd@ldap.example.com/o=frontec??sub?mail=*"
On Windows, if no user/password specified, auto-negotiation mechanism will be
used with current logon credentials (SSPI/SPNEGO).
On Windows, if no user/password is specified, an auto-negotiation mechanism is used
with current logon credentials (SSPI/SPNEGO).
## Environment Variables
@ -830,7 +829,7 @@ set in (only an asterisk, `*` matches all hosts)
NO_PROXY
If the hostname matches one of these strings, or the host is within the domain
of one of these strings, transactions with that node will not be done over
of one of these strings, transactions with that node are not done over the
proxy. When a domain is used, it needs to start with a period. A user can
specify that both www.example.com and foo.example.com should not use a proxy
by setting `NO_PROXY` to `.example.com`. By including the full name you can
@ -845,7 +844,7 @@ Unix introduced the `.netrc` concept a long time ago. It is a way for a user
to specify name and password for commonly visited FTP sites in a file so that
you do not have to type them in each time you visit those sites. You realize
this is a big security risk if someone else gets hold of your passwords,
therefore most Unix programs will not read this file unless it is only readable
therefore most Unix programs do not read this file unless it is only readable
by yourself (curl does not care though).
Curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
@ -877,8 +876,8 @@ Then use curl in way similar to:
curl --krb private ftp://krb4site.example.com -u username:fakepwd
There is no use for a password on the `-u` switch, but a blank one will make
curl ask for one and you already entered the real password to `kinit`/`kauth`.
There is no use for a password on the `-u` switch, but a blank one makes curl
ask for one and you already entered the real password to `kinit`/`kauth`.
## TELNET
@ -888,8 +887,8 @@ command line similar to:
curl telnet://remote.example.com
Enter the data to pass to the server on stdin. The result will be sent to
stdout or to the file you specify with `-o`.
Enter the data to pass to the server on stdin. The result is sent to stdout or
to the file you specify with `-o`.
You might want the `-N`/`--no-buffer` option to switch off the buffered output
for slow connections or similar.
@ -911,20 +910,20 @@ accordingly.
## Persistent Connections
Specifying multiple files on a single command line will make curl transfer all
of them, one after the other in the specified order.
Specifying multiple files on a single command line makes curl transfer all of
them, one after the other in the specified order.
libcurl will attempt to use persistent connections for the transfers so that
the second transfer to the same host can use the same connection that was
already initiated and was left open in the previous transfer. This greatly
decreases connection time for all but the first transfer and it makes a far
better use of the network.
libcurl attempts to use persistent connections for the transfers so that the
second transfer to the same host can use the same connection that was already
initiated and was left open in the previous transfer. This greatly decreases
connection time for all but the first transfer and it makes a far better use
of the network.
Note that curl cannot use persistent connections for transfers that are used
in subsequent curl invokes. Try to stuff as many URLs as possible on the same
command line if they are using the same host, as that will make the transfers
command line if they are using the same host, as that makes the transfers
faster. If you use an HTTP proxy for file transfers, practically all transfers
will be persistent.
are persistent.
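For example, both of these transfers can ride on a single connection:

```
curl -O http://www.example.com/page1.html -O http://www.example.com/page2.html
```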
## Multiple Transfers With A Single Command Line
@ -945,11 +944,10 @@ You can also upload multiple files in a similar fashion:
## IPv6
curl will connect to a server with IPv6 when a host lookup returns an IPv6
address and fall back to IPv4 if the connection fails. The `--ipv4` and
`--ipv6` options can specify which address to use when both are
available. IPv6 addresses can also be specified directly in URLs using the
syntax:
curl connects to a server with IPv6 when a host lookup returns an IPv6 address
and falls back to IPv4 if the connection fails. The `--ipv4` and `--ipv6`
options can specify which address to use when both are available. IPv6
addresses can also be specified directly in URLs using the syntax:
http://[2001:1890:1112:1::20]/overview.html
@ -24,4 +24,4 @@ Remaining limitations:
- Only QoS level 0 is implemented for publish
- No way to set retain flag for publish
- No TLS (mqtts) support
- Naive EAGAIN handling will not handle split messages
- Naive EAGAIN handling does not handle split messages
@ -13,10 +13,10 @@ How do you proceed to add a new protocol and what are the requirements?
This document is an attempt to describe things to consider. There is no
checklist of the twenty-seven things you need to cross off. We view the entire
effort as a whole and then judge if it seems to be the right thing - for
now. The more things that look right, fit our patterns and are done in ways
that align with our thinking, the better are the chances that we will agree
that supporting this protocol is a grand idea.
effort as a whole and then judge if it seems to be the right thing - for now.
The more things that look right, fit our patterns and are done in ways that
align with our thinking, the better are the chances that we agree that
supporting this protocol is a grand idea.
## Mutual benefit is preferred
@ -93,18 +93,18 @@ protocol - but it might require a bit of an effort to make it happen.
We cannot assume that users are particularly familiar with details and
peculiarities of the protocol. It needs documentation.
Maybe it even needs some internal documentation so that the developers who
will try to debug something five years from now can figure out functionality a
little easier!
Maybe it even needs some internal documentation so that the developers who try
to debug something five years from now can figure out functionality a little
easier!
The protocol specification itself should be freely available without requiring
a non-disclosure agreement or similar.
## Do not compare
We are constantly raising the bar and we are constantly improving the
project. A lot of things we did in the past would not be acceptable if done
today. Therefore, you might be tempted to use shortcuts or "hacks" you can
spot other - existing - protocol implementations have used, but there is
nothing to gain from that. The bar has been raised. Former "cheats" will not be
tolerated anymore.
We are constantly raising the bar and we are constantly improving the project.
A lot of things we did in the past would not be acceptable if done today.
Therefore, you might be tempted to use shortcuts or "hacks" you can spot
other - existing - protocol implementations have used, but there is nothing to
gain from that. The bar has been raised. Former "cheats" are not tolerated
anymore.
@ -5,9 +5,9 @@ parallel.
## -Z, --parallel
When this command line option is used, curl will perform the transfers given
to it at the same time. It will do up to `--parallel-max` concurrent
transfers, with a default value of 50.
When this command line option is used, curl performs the transfers given to it
at the same time. It does up to `--parallel-max` concurrent transfers, with a
default value of 50.
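For example (URLs are illustrative):

```
# fetch three files at the same time, capped at two concurrent transfers
curl -Z --parallel-max 2 -O http://example.com/1 -O http://example.com/2 \
  -O http://example.com/3
```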
## Progress meter
@ -2,9 +2,9 @@
# Documentation
you will find a mix of various documentation in this directory and
subdirectories, using several different formats. Some of them are not ideal
for reading directly in your browser.
You find a mix of various documentation in this directory and subdirectories,
using several different formats. Some of them are not ideal for reading
directly in your browser.
If you would rather see the rendered version of the documentation, check out the
curl website's [documentation section](https://curl.se/docs/) for
@ -17,9 +17,8 @@ in the source code repo
the tag is GPG signed (using -s).
- run `./maketgz 7.34.0` to build the release tarballs. It is important that
you run this on a machine with the correct set of autotools etc installed
as this is what then will be shipped and used by most users on \*nix like
systems.
you run this on a machine with the correct set of autotools etc installed as
this is what is shipped and used by most users on \*nix like systems.
- push the git commits and the new tag
@ -48,8 +48,8 @@ The easy way is to start with a recent previously published advisory and just
blank out old texts and save it using a new name. Save the subtitles and
general layout.
Some details and metadata will be extracted from this document so it is
important to stick to the existing format.
Some details and metadata are extracted from this document so it is important
to stick to the existing format.
The first line must be the title of the issue.
@ -22,14 +22,14 @@
## CA bundle missing intermediate certificates
When using said CA bundle to verify a server cert, you will experience
When using said CA bundle to verify a server cert, you may experience
problems if your CA store does not contain the certificates for the
intermediates and the server does not provide them.
The TLS protocol mandates that the intermediate certificates are sent in the
handshake, but as browsers have ways to survive or work around such
omissions, missing intermediates in TLS handshakes still happen that
browser-users will not notice.
omissions, missing intermediates in TLS handshakes still happen without
browser users noticing.
Browsers work around this problem in two ways: they cache intermediate
certificates from previous transfers and some implement the TLS "AIA"
@ -123,26 +123,26 @@ server, do one of the following:
Neglecting to use one of the above methods when dealing with a server using a
certificate that is not signed by one of the certificates in the installed CA
certificate store, will cause SSL to report an error ("certificate verify
failed") during the handshake and SSL will then refuse further communication
with that server.
certificate store, causes SSL to report an error (`certificate verify failed`)
during the handshake and SSL then refuses further communication with that
server.
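For example, pointing curl at a CA bundle that does contain the signing
certificate avoids the error (the path and hostname are illustrative):

```
curl --cacert /path/to/ca-bundle.pem https://internal.example.com/
```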
Certificate Verification with Schannel and Secure Transport
-----------------------------------------------------------
If libcurl was built with Schannel (Microsoft's native TLS engine) or Secure
Transport (Apple's native TLS engine) support, then libcurl will still perform
peer certificate verification, but instead of using a CA cert bundle, it will
use the certificates that are built into the OS. These are the same
certificates that appear in the Internet Options control panel (under Windows)
or Keychain Access application (under OS X). Any custom security rules for
certificates will be honored.
Transport (Apple's native TLS engine) support, then libcurl still performs
peer certificate verification, but instead of using a CA cert bundle, it uses
the certificates that are built into the OS. These are the same certificates
that appear in the Internet Options control panel (under Windows) or Keychain
Access application (under OS X). Any custom security rules for certificates
are honored.
Schannel will run CRL checks on certificates unless peer verification is
disabled. Secure Transport on iOS will run OCSP checks on certificates unless
peer verification is disabled. Secure Transport on OS X will run either OCSP
or CRL checks on certificates if those features are enabled, and this behavior
can be adjusted in the preferences of Keychain Access.
Schannel runs CRL checks on certificates unless peer verification is disabled.
Secure Transport on iOS runs OCSP checks on certificates unless peer
verification is disabled. Secure Transport on OS X runs either OCSP or CRL
checks on certificates if those features are enabled, and this behavior can be
adjusted in the preferences of Keychain Access.
HTTPS proxy
-----------
@ -10,10 +10,9 @@
web servers are all important tasks today.
Curl is a command line tool for doing all sorts of URL manipulations and
transfers, but this particular document will focus on how to use it when
doing HTTP requests for fun and profit. This document assumes that you know
how to invoke `curl --help` or `curl --manual` to get basic information about
it.
transfers, but this particular document focuses on how to use it when doing
HTTP requests for fun and profit. This document assumes that you know how to
invoke `curl --help` or `curl --manual` to get basic information about it.
Curl is not written to do everything for you. It makes the requests, it gets
the data, it sends data and it retrieves the information. You probably need
@ -24,8 +23,8 @@
HTTP is the protocol used to fetch data from web servers. It is a simple
protocol that is built upon TCP/IP. The protocol also allows information to
get sent to the server from the client using a few different methods, as will
be shown here.
get sent to the server from the client using a few different methods, as is
shown here.
HTTP is plain ASCII text lines being sent by the client to a server to
request a particular action, and then the server replies a few text lines
@ -39,9 +38,9 @@
## See the Protocol
Using curl's option [`--verbose`](https://curl.se/docs/manpage.html#-v)
(`-v` as a short option) will display what kind of commands curl sends to the
server, as well as a few other informational texts.
Using curl's option [`--verbose`](https://curl.se/docs/manpage.html#-v) (`-v`
as a short option) displays what kind of commands curl sends to the server,
as well as a few other informational texts.
`--verbose` is the single most useful option when it comes to debug or even
understand the curl<->server interaction.
@ -59,19 +58,19 @@
Many times you may wonder what exactly is taking all the time, or you just
want to know the amount of milliseconds between two points in a transfer. For
those, and other similar situations, the
[`--trace-time`](https://curl.se/docs/manpage.html#--trace-time) option
is what you need. It will prepend the time to each trace output line:
[`--trace-time`](https://curl.se/docs/manpage.html#--trace-time) option is
what you need. It prepends the time to each trace output line:
curl --trace-ascii d.txt --trace-time http://example.com/
## See which Transfer
When doing parallel transfers, it is relevant to see which transfer is
doing what. When response headers are received (and logged) you need to
know which transfer these are for.
[`--trace-ids`](https://curl.se/docs/manpage.html#--trace-ids) option
is what you need. It will prepend the transfer and connection identifier
to each trace output line:
When doing parallel transfers, it is relevant to see which transfer is doing
what. When response headers are received (and logged) you need to know which
transfer these are for.
[`--trace-ids`](https://curl.se/docs/manpage.html#--trace-ids) option is what
you need. It prepends the transfer and connection identifier to each trace
output line:
curl --trace-ascii d.txt --trace-ids http://example.com/
@ -92,7 +91,7 @@
## Host
The hostname is usually resolved using DNS or your /etc/hosts file to an IP
address and that is what curl will communicate with. Alternatively you specify
address and that is what curl communicates with. Alternatively you specify
the IP address directly in the URL instead of a name.
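For example, `--resolve` keeps the hostname in the URL (and thus in Host: and
TLS handling) while sending the traffic to an address you choose (the address
is illustrative):

```
# send the request for www.example.com to 127.0.0.1 instead of the DNS answer
curl --resolve www.example.com:80:127.0.0.1 http://www.example.com/
```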
For development and other trying out situations, you can point to a different
@ -165,9 +164,9 @@
## HEAD
You can ask the remote server for ONLY the headers by using the
[`--head`](https://curl.se/docs/manpage.html#-I) (`-I`) option which
will make curl issue a HEAD request. In some special cases servers deny the
HEAD method while others still work, which is a particular kind of annoyance.
[`--head`](https://curl.se/docs/manpage.html#-I) (`-I`) option which makes
curl issue a HEAD request. In some special cases servers deny the HEAD method
while others still work, which is a particular kind of annoyance.
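For example:

```
# fetch and show the response headers only; no body is transferred
curl --head http://example.com/
```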
The HEAD method is defined and made so that the server returns the headers
exactly the way it would do for a GET, but without a body. It means that you
@ -177,9 +176,9 @@
## Multiple URLs in a single command line
A single curl command line may involve one or many URLs. The most common case
is probably to just use one, but you can specify any amount of URLs. Yes
any. No limits. You will then get requests repeated over and over for all the
given URLs.
is probably to just use one, but you can specify any amount of URLs. Yes any.
No limits. You then get requests repeated over and over for all the given
URLs.
Example, send two GET requests:
@ -197,14 +196,14 @@
## Multiple HTTP methods in a single command line
Sometimes you need to operate on several URLs in a single command line and do
different HTTP methods on each. For this, you will enjoy the
[`--next`](https://curl.se/docs/manpage.html#-:) option. It is basically
a separator that separates a bunch of options from the next. All the URLs
before `--next` will get the same method and will get all the POST data
merged into one.
different HTTP methods on each. For this, you might enjoy the
[`--next`](https://curl.se/docs/manpage.html#-:) option. It is basically a
separator that separates a bunch of options from the next. All the URLs
before `--next` get the same method and get all the POST data merged into
one.
When curl reaches the `--next` on the command line, it will sort of reset the
method and the POST data and allow a new set.
When curl reaches the `--next` on the command line, it resets the method and
the POST data and allows a new set.
Perhaps this is best shown with a few examples. To send first a HEAD and then
a GET:
@ -241,14 +240,14 @@
</form>
```
In your favorite browser, this form will appear with a text box to fill in
and a press-button labeled "OK". If you fill in '1905' and press the OK
button, your browser will then create a new URL to get for you. The URL will
get `junk.cgi?birthyear=1905&press=OK` appended to the path part of the
previous URL.
In your favorite browser, this form appears with a text box to fill in and a
press-button labeled "OK". If you fill in '1905' and press the OK button,
your browser then creates a new URL to get for you. The URL gets
`junk.cgi?birthyear=1905&press=OK` appended to the path part of the previous
URL.
If the original form was seen on the page `www.example.com/when/birth.html`,
the second page you will get will become
the second page you get becomes
`www.example.com/when/junk.cgi?birthyear=1905&press=OK`.
Most search engines work this way.
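To make the same request with curl, pass that full URL, quoted so the shell
does not interpret the `&`:

```
curl "http://www.example.com/when/junk.cgi?birthyear=1905&press=OK"
```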
@ -267,7 +266,7 @@
amount of fields creating a long and unreadable URL.
The HTTP protocol then offers the POST method. This way the client sends the
data separated from the URL and thus you will not see any of it in the URL
data separated from the URL and thus you do not see any of it in the URL
address field.
The form would look similar to the previous one:
@ -284,21 +283,20 @@
curl --data "birthyear=1905&press=%20OK%20" http://www.example.com/when/junk.cgi
This kind of POST will use the Content-Type
`application/x-www-form-urlencoded` and is the most widely used POST kind.
This kind of POST uses the Content-Type `application/x-www-form-urlencoded`
and is the most widely used POST kind.
The data you send to the server MUST already be properly encoded, curl will
The data you send to the server MUST already be properly encoded, curl does
not do that for you. For example, if you want the data to contain a space,
you need to replace that space with `%20`, etc. Failing to comply with this will
most likely cause your data to be received wrongly and messed up.
you need to replace that space with `%20`, etc. Failing to comply with this
most likely causes your data to be received wrongly and messed up.
Recent curl versions can in fact url-encode POST data for you, like this:
curl --data-urlencode "name=I am Daniel" http://www.example.com
If you repeat `--data` several times on the command line, curl will
concatenate all the given data pieces - and put a `&` symbol between each
data segment.
If you repeat `--data` several times on the command line, curl concatenates
all the given data pieces - and puts a `&` symbol between each data segment.
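So these two command lines send the same request body:

```
curl --data "birthyear=1905&press=%20OK%20" http://www.example.com/when/junk.cgi
curl --data "birthyear=1905" --data "press=%20OK%20" \
  http://www.example.com/when/junk.cgi
```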
## File Upload POST
@ -339,7 +337,7 @@
</form>
```
To POST this with curl, you will not have to think about if the fields are
To POST this with curl, you do not have to think about if the fields are
hidden or not. To curl they are all the same:
curl --data "birthyear=1905&press=OK&person=daniel" [URL]
@ -354,7 +352,7 @@
your local disk, modify the 'method' to a GET, and press the submit button
(you could also change the action URL if you want to).
You will then clearly see the data get appended to the URL, separated with a
You then clearly see the data get appended to the URL, separated with a
`?`-letter as GET forms are supposed to.
# HTTP upload
@ -409,7 +407,7 @@
[`--proxy-digest`](https://curl.se/docs/manpage.html#--proxy-digest).
If you use any one of these user+password options but leave out the password
part, curl will prompt for the password interactively.
part, curl prompts for the password interactively.
## Hiding credentials
@ -419,7 +417,7 @@
options. There are ways to circumvent this.
It is worth noting that while this is how HTTP Authentication works, many
websites will not use this concept when they provide logins etc. See the Web
websites do not use this concept when they provide logins etc. See the Web
Login chapter further below for more details on that.
# More HTTP Headers
@ -447,10 +445,10 @@
make them look the best possible for their particular browsers. They usually
also do different kinds of JavaScript etc.
At times, you will see that getting a page with curl will not return the same
page that you see when getting the page with your browser. Then you know it
is time to set the User Agent field to fool the server into thinking you are
one of those browsers.
At times, you may learn that getting a page with curl does not return the
same page that you see when getting the page with your browser. Then you know
it is time to set the User Agent field to fool the server into thinking you
are one of those browsers.
To make curl look like Internet Explorer 5 on a Windows 2000 box:
@ -469,20 +467,18 @@
new page keeping newly generated output. The header that tells the browser to
redirect is `Location:`.
Curl does not follow `Location:` headers by default, but will simply display
such pages in the same manner it displays all HTTP replies. It does however
feature an option that will make it attempt to follow the `Location:`
pointers.
Curl does not follow `Location:` headers by default, but simply displays such
pages in the same manner it displays all HTTP replies. It does however
feature an option that makes it attempt to follow the `Location:` pointers.
To tell curl to follow a Location:
curl --location http://www.example.com
If you use curl to POST to a site that immediately redirects you to another
page, you can safely use
[`--location`](https://curl.se/docs/manpage.html#-L) (`-L`) and
`--data`/`--form` together. Curl will only use POST in the first request, and
then revert to GET in the following operations.
page, you can safely use [`--location`](https://curl.se/docs/manpage.html#-L)
(`-L`) and `--data`/`--form` together. Curl only uses POST in the first
request, and then reverts to GET in the following operations.
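For example:

```
# POST the data; if the response is a redirect, follow it with a GET
curl --location --data "birthyear=1905&press=OK" \
  http://www.example.com/when/junk.cgi
```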
## Other redirects
@ -549,7 +545,7 @@
format that Netscape and Mozilla once used. It is a convenient way to share
cookies between scripts or invokes. The `--cookie` (`-b`) switch
automatically detects if a given file is such a cookie file and parses it,
and by using the `--cookie-jar` (`-c`) option you will make curl write a new
and by using the `--cookie-jar` (`-c`) option you make curl write a new
cookie file at the end of an operation:
curl --cookie cookies.txt --cookie-jar newcookies.txt \
@ -568,7 +564,7 @@
advanced features to do secure transfers over HTTP.
Curl supports encrypted fetches when built to use a TLS library and it can be
built to use one out of a fairly large set of libraries - `curl -V` will show
built to use one out of a fairly large set of libraries - `curl -V` shows
which one your curl was built to use (if any!). To get a page from an HTTPS
server, simply run curl like:
@ -586,11 +582,10 @@
curl --cert mycert.pem https://secure.example.com
curl also tries to verify that the server is who it claims to be, by
verifying the server's certificate against a locally stored CA cert
bundle. Failing the verification will cause curl to deny the connection. You
must then use [`--insecure`](https://curl.se/docs/manpage.html#-k)
(`-k`) in case you want to tell curl to ignore that the server cannot be
verified.
verifying the server's certificate against a locally stored CA cert bundle.
Failing the verification causes curl to deny the connection. You must then
use [`--insecure`](https://curl.se/docs/manpage.html#-k) (`-k`) in case you
want to tell curl to ignore that the server cannot be verified.
More about server certificate verification and ca cert bundles can be read in
the [`SSLCERTS` document](https://curl.se/docs/sslcerts.html).
@ -627,18 +622,18 @@
## More on changed methods
It should be noted that curl selects which methods to use on its own
depending on what action to ask for. `-d` will do POST, `-I` will do HEAD and
depending on what action to ask for. `-d` makes a POST, `-I` makes a HEAD and
so on. If you use the [`--request`](https://curl.se/docs/manpage.html#-X) /
`-X` option you can change the method keyword curl selects, but you will not
`-X` option you can change the method keyword curl selects, but you do not
modify curl's behavior. This means that if you for example use -d "data" to
do a POST, you can modify the method to a `PROPFIND` with `-X` and curl will
still think it sends a POST. You can change the normal GET to a POST method
by simply adding `-X POST` in a command line like:
do a POST, you can modify the method to a `PROPFIND` with `-X` and curl still
thinks it sends a POST. You can change the normal GET to a POST method by
simply adding `-X POST` in a command line like:
curl -X POST http://example.org/
curl will however still act as if it sent a GET so it will not send any
request body etc.
curl however still acts as if it sent a GET so it does not send any request
body etc.
# Web Login
@ -649,13 +644,13 @@
login forms work and how to login to them using curl.
It can also be noted that to do this properly in an automated fashion, you
will most certainly need to script things and do multiple curl invokes etc.
most certainly need to script things and do multiple curl invokes etc.
First, servers mostly use cookies to track the logged-in status of the
client, so you will need to capture the cookies you receive in the
responses. Then, many sites also set a special cookie on the login page (to
make sure you got there through their login page) so you should make a habit
of first getting the login-form page to capture the cookies set there.
client, so you need to capture the cookies you receive in the responses.
Then, many sites also set a special cookie on the login page (to make sure
you got there through their login page) so you should make a habit of first
getting the login-form page to capture the cookies set there.
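A minimal sketch of that habit, assuming a hypothetical login form at https://example.com/login with user and password fields:

```
# step 1: fetch the login page and save the cookies it sets
curl --cookie-jar session.txt https://example.com/login

# step 2: post the credentials, sending and updating the same cookie jar
curl --cookie session.txt --cookie-jar session.txt \
  --data "user=alice&password=secret" https://example.com/login
```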
Some web-based login systems feature various amounts of JavaScript, and
sometimes they use such code to set or modify cookie contents. Possibly they
@ -675,7 +670,7 @@
## Some debug tricks
Many times when you run curl on a site, you will notice that the site does not
Many times when you run curl on a site, you notice that the site does not
seem to respond the same way to your curl requests as it does to your
browser's.
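Two common first steps, shown here as a sketch: watch the full exchange in verbose mode, and try presenting a browser-like User-Agent (the string below is only an example):

```
# show request and response headers as they go by
curl -v http://example.com/

# pretend to be a browser
curl -A "Mozilla/5.0" http://example.com/
```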
@ -30,8 +30,8 @@ same behavior!
For example, if you use one parser to check if a URL uses a good hostname or
the correct auth field, and then pass on that same URL to a *second* parser,
there will always be a risk it treats the same URL differently. There is no
right and wrong in URL land, only differences of opinions.
there is always a risk it treats the same URL differently. There is no right
and wrong in URL land, only differences of opinion.
libcurl offers a separate API to its URL parser for this reason, among others.
@ -55,8 +55,8 @@ security concerns:
## "RFC 3986 plus"
curl recognizes a URL syntax that we call "RFC 3986 plus". It is grounded on
the well established RFC 3986 to make sure previously written command lines and
curl using scripts will remain working.
the well established RFC 3986 to make sure previously written command lines
and curl using scripts remain working.
curl's URL parser allows a few deviations from the spec in order to
inter-operate better with URLs that appear in the wild.
@ -92,8 +92,7 @@ curl supports "URLs" that do not start with a scheme. This is not supported by
any of the specifications. This is a shortcut to entering URLs that was
supported by browsers early on and has been mimicked by curl.
Based on what the hostname starts with, curl will "guess" what protocol to
use:
Based on what the hostname starts with, curl "guesses" what protocol to use:
- `ftp.` means FTP
- `dict.` means DICT
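For example, a schemeless hostname with a telling prefix:

```
# treated as ftp://ftp.example.com/ because of the ftp. prefix
curl ftp.example.com
```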
@ -201,8 +200,8 @@ If there is a colon after the hostname, that should be followed by the port
number to use. 1 - 65535. curl also supports a blank port number field - but
only if the URL starts with a scheme.
If the port number is not specified in the URL, curl will used a default port
based on the provide scheme:
If the port number is not specified in the URL, curl uses a default port
number based on the provided scheme:
DICT 2628, FTP 21, FTPS 990, GOPHER 70, GOPHERS 70, HTTP 80, HTTPS 443,
IMAP 143, IMAPS 993, LDAP 389, LDAPS 636, MQTT 1883, POP3 110, POP3S 995,
@ -216,7 +215,7 @@ SMTP 25, SMTPS 465, TELNET 23, TFTP 69
The path part of an FTP request specifies the file to retrieve and from which
directory. If the file part is omitted then libcurl downloads the directory
listing for the directory specified. If the directory is omitted then the
directory listing for the root / home directory will be returned.
directory listing for the root / home directory is returned.
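As a sketch with a made-up host:

```
# file part present: retrieve readme.txt from the docs directory
curl ftp://ftp.example.com/docs/readme.txt

# file part omitted: download a listing of the docs directory instead
curl ftp://ftp.example.com/docs/
```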
FTP servers typically put the user in its "home directory" after login, which
then differs between users. To explicitly specify the root directory of an FTP
@ -231,14 +230,14 @@ to read or write such a path.
curl only allows the hostname part of a FILE URL to be one out of these three
alternatives: `localhost`, `127.0.0.1` or blank ("", zero characters).
Anything else will make curl fail to parse the URL.
Anything else makes curl fail to parse the URL.
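A quick sketch of the accepted forms (/tmp/file.txt is a placeholder path):

```
# all three accepted hostname variants point at the same local file
curl file://localhost/tmp/file.txt
curl file://127.0.0.1/tmp/file.txt
curl file:///tmp/file.txt
```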
### Windows-specific FILE details
curl accepts that the FILE URL's path starts with a "drive letter". That is a
single letter `a` to `z` followed by a colon or a pipe character (`|`).
The Windows operating system itself will convert some file accesses to perform
The Windows operating system itself converts some file accesses to perform
network accesses over SMB/CIFS, through several different file path patterns.
This way, a `file://` URL passed to curl *might* be converted into a network
access inadvertently and unknowingly to curl. This is a Windows feature curl
@ -321,7 +320,7 @@ Search for the `DN` as `My Organization`:
ldap://ldap.example.com/o=My%20Organization
the same search but will only return `postalAddress` attributes:
the same search but returning only `postalAddress` attributes:
ldap://ldap.example.com/o=My%20Organization?postalAddress
@ -352,7 +351,7 @@ To specify a path relative to the user's home directory on the server, prepend
The path part of an SFTP URL specifies the file to retrieve or upload. If the
path ends with a slash (`/`) then a directory listing is returned instead of a
file. If the path is omitted entirely then the directory listing for the root
/ home directory will be returned.
/ home directory is returned.
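For example (user and host are placeholders):

```
# path ends with a slash: a directory listing is returned
curl -u user: sftp://example.com/tmp/

# path omitted entirely: listing of the root / home directory
curl -u user: sftp://example.com/
```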
## SMB
The path part of a SMB request specifies the file to retrieve and from what
@ -368,8 +367,8 @@ curl supports SMB version 1 (only)
## SMTP
The path part of a SMTP request specifies the hostname to present during
communication with the mail server. If the path is omitted, then libcurl will
attempt to resolve the local computer's hostname. However, this may not
communication with the mail server. If the path is omitted, then libcurl
attempts to resolve the local computer's hostname. However, this may not
return the fully qualified domain name that is required by some mail servers
and specifying this path allows you to set an alternative name, such as your
machine's fully qualified domain name, which you might have obtained from an
@ -385,7 +384,6 @@ traditional URL, followed by a space and a series of space-separated
`name=value` pairs.
While space is not typically a "legal" letter, libcurl accepts them. When a
user wants to pass in a `#` (hash) character it will be treated as a fragment
and get cut off by libcurl if provided literally. You will instead have to
escape it by providing it as backslash and its ASCII value in hexadecimal:
`\23`.
user wants to pass in a `#` (hash) character it is treated as a fragment and
it gets cut off by libcurl if provided literally. You have to escape it by
providing it as backslash and its ASCII value in hexadecimal: `\23`.
@ -14,11 +14,11 @@ Version Numbers and Releases
## Bumping numbers
One of these numbers will get bumped in each new release. The numbers to the
right of a bumped number will be reset to zero.
One of these numbers gets bumped in each new release. The numbers to the right
of a bumped number are reset to zero.
The main version number will get bumped when *really* big, world colliding
changes are made. The release number is bumped when changes are performed or
The main version number is bumped when *really* big, world colliding changes
are made. The release number is bumped when changes are performed or
things/features are added. The patch number is bumped when the changes are
mere bugfixes.
@ -17,8 +17,8 @@ The typical process for handling a new security vulnerability is as follows.
No information should be made public about a vulnerability until it is
formally announced at the end of this process. That means, for example, that a
bug tracker entry must NOT be created to track the issue since that will make
the issue public and it should not be discussed on any of the project's public
bug tracker entry must NOT be created to track the issue since that makes the
issue public and it should not be discussed on any of the project's public
mailing lists. Messages associated with any commits should not make any
reference to the security nature of the commit if done prior to the public
announcement.
@ -108,7 +108,7 @@ its way of working. You must have been around for a good while and you should
have no plans of vanishing in the near future.
We do not make the list of participants public mostly because it tends to vary
somewhat over time and a list somewhere will only risk getting outdated.
somewhat over time and a list somewhere only risks getting outdated.
## Publishing Security Advisories
@ -255,8 +255,8 @@ data. We consider this functionality a best-effort and omissions are not
security vulnerabilities.
- not all systems allow the arguments to be blanked in the first place
- since curl blanks the argument itself they will be readable for a short
moment no matter what
- since curl blanks the argument itself they are readable for a short moment
no matter what
- virtually every argument can contain sensitive data, depending on use
- blanking all arguments would make it impractical for users to differentiate
curl command lines in process listings
@ -13,8 +13,7 @@ using the `ws://` or `wss://` URL schemes. The latter one being the secure
version done over HTTPS.
When using `wss://` to do WebSocket over HTTPS, the standard TLS and HTTPS
options will be acknowledged for the CA, verification of server certificate
etc.
options are acknowledged for the CA, verification of server certificate etc.
WebSocket communication is done by upgrading a connection from either HTTP or
HTTPS. When given a WebSocket URL to work with, libcurl considers it a
@ -64,7 +63,7 @@ directions.
If the given WebSocket URL (using `ws://` or `wss://`) fails to get upgraded
via a 101 response code and instead gets another response code back from the
HTTP server - the transfer will return `CURLE_HTTP_RETURNED_ERROR` for that
HTTP server - the transfer returns `CURLE_HTTP_RETURNED_ERROR` for that
transfer. Note that even 2xx response codes are then considered an error
since the server failed to provide a WebSocket transfer.
@ -38,10 +38,10 @@ libcurl. Currently that is only the include path to the curl include files.
## --checkfor [version]
Specify the oldest possible libcurl version string you want, and this
script will return 0 if the current installation is new enough or it
returns 1 and outputs a text saying that the current version is not new
enough. (Added in 7.15.4)
Specify the oldest possible libcurl version string you want, and this script
returns 0 if the current installation is new enough or it returns 1 and
outputs a text saying that the current version is not new enough. (Added in
7.15.4)
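A typical sketch of its use in a build script:

```
# fail early if the installed libcurl predates 7.15.4
curl-config --checkfor 7.15.4 || echo "libcurl is too old"
```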
## --configure
@ -51,7 +51,7 @@ Displays the arguments given to configure when building curl.
Lists what particular main features the installed libcurl was built with. At
the time of writing, this list may include SSL, KRB4 or IPv6. Do not assume
any particular order. The keywords will be separated by newlines. There may be
any particular order. The keywords are separated by newlines. There may be
none, one, or several keywords in the list.
## --help
@ -60,8 +60,8 @@ Displays the available options.
## --libs
Shows the complete set of libs and other linker options you will need in order
to link your application with libcurl.
Shows the complete set of libs and other linker options you need in order to
link your application with libcurl.
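For example, building a small program against libcurl might look like this (app.c is a placeholder):

```
cc $(curl-config --cflags) app.c -o app $(curl-config --libs)
```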
## --prefix
@ -74,19 +74,19 @@ on. The prefix is set with "configure --prefix".
Lists what particular protocols the installed libcurl was built to support. At
the time of writing, this list may include HTTP, HTTPS, FTP, FTPS, FILE,
TELNET, LDAP, DICT and many more. Do not assume any particular order. The
protocols will be listed using uppercase and are separated by newlines. There
may be none, one, or several protocols in the list. (Added in 7.13.0)
protocols are listed using uppercase and are separated by newlines. There may
be none, one, or several protocols in the list. (Added in 7.13.0)
## --ssl-backends
Lists the SSL backends that were enabled when libcurl was built. It might be
no, one or several names. If more than one name, they will appear
comma-separated. (Added in 7.58.0)
no, one or several names. If more than one name, they appear comma-separated.
(Added in 7.58.0)
## --static-libs
Shows the complete set of libs and other linker options you will need in order
to link your application with libcurl statically. (Added in 7.17.1)
Shows the complete set of libs and other linker options you need in order to
link your application with libcurl statically. (Added in 7.17.1)
## --version
@ -28,13 +28,13 @@ ABI - Application Binary Interface
## SONAME Bumps
Whenever there are changes done to the library that will cause an ABI
breakage, that may require your application to get attention or possibly be
changed to adhere to new things, we will bump the SONAME. Then the library
will get a different output name and thus can in fact be installed in
parallel with an older installed lib (on most systems). Thus, old
applications built against the previous ABI version will remain working and
using the older lib, while newer applications build and use the newer one.
Whenever there are changes done to the library that cause an ABI breakage,
that may require your application to get attention or possibly be changed to
adhere to new things, we bump the SONAME. Then the library gets a different
output name and thus can in fact be installed in parallel with an older
installed lib (on most systems). Thus, old applications built against the
previous ABI version remain working and using the older lib, while newer
applications build and use the newer one.
During the first seven years of libcurl releases, there have only been four
ABI breakages.
@ -46,7 +46,7 @@ ABI - Application Binary Interface
Going to an older libcurl version from one you are currently using can be a
tricky thing. Mostly we add features and options to newer libcurls as that
will not break ABI or hamper existing applications. This has the implication
does not break ABI or hamper existing applications. This has the implication
that going backwards may get you in a situation where you pick a libcurl that
does not support the options your application needs. Or possibly you even
downgrade so far so you cross an ABI break border and thus a different
@ -32,9 +32,9 @@ the **http_proxy** one which is only used lowercase. Note also that some
systems actually have a case insensitive handling of environment variables and
then of course **HTTP_PROXY** still works.
An exception exists for the WebSocket **ws** and **wss** URL schemes,
where libcurl first checks **ws_proxy** or **wss_proxy** but if they are
not set, it will fall back and try the http and https versions instead if set.
An exception exists for the WebSocket **ws** and **wss** URL schemes, where
libcurl first checks **ws_proxy** or **wss_proxy** but if they are not set, it
falls back and tries the http and https versions instead if set.
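A sketch of that order, with a placeholder proxy host:

```
# for ws:// URLs, ws_proxy is checked first
export ws_proxy="http://proxy.example.com:3128"
curl ws://echo.example.com/

# with ws_proxy unset, curl falls back to http_proxy for ws://
unset ws_proxy
export http_proxy="http://proxy.example.com:3128"
curl ws://echo.example.com/
```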
## `ALL_PROXY`
@ -365,7 +365,7 @@ hard to avoid.
# Active FTP passes on the local IP address
If you use curl/libcurl to do *active* FTP transfers, curl will pass on the
If you use curl/libcurl to do *active* FTP transfers, curl passes on the
address of your local IP to the remote server - even when for example using a
SOCKS or HTTP proxy in between curl and the target server.
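Active transfers are requested with `--ftp-port` (`-P`); a minimal sketch:

```
# "-" tells curl to use the same IP address its control connection already uses
curl -P - ftp://ftp.example.com/file.txt
```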
@ -25,7 +25,7 @@ authentication certificates are extracted. These are then processed with the
OpenSSL command line tool to produce the final ca-bundle output file.
The default *output* name is **ca-bundle.crt**. By setting it to '-' (a single
dash) you will get the output sent to STDOUT instead of a file.
dash) you get the output sent to STDOUT instead of a file.
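For instance, a sketch of both forms:

```
# write the bundle to the default ca-bundle.crt in the current directory
./mk-ca-bundle.pl

# send the output to STDOUT and redirect it to a name of your choice
./mk-ca-bundle.pl - > my-bundle.pem
```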
The PEM format this script uses for output makes the result readily available
for use by just about all OpenSSL or GnuTLS powered applications, such as curl
@ -56,8 +56,8 @@ print version info about used modules
## -k
Allow insecure data transfer. By default (since 1.27) this command will fail
if the HTTPS transfer fails. This overrides that decision (and opens for
Allow insecure data transfer. By default (since 1.27) this command fails if
the HTTPS transfer fails. This overrides that decision (and opens for
man-in-the-middle attacks).
## -l
@ -68,8 +68,8 @@ print license info about *certdata.txt*
(Added in 1.26) Include meta data comments in the output. The meta data is
specific information about each certificate that is stored in the original
file as comments and using this option will make those comments get passed on
to the output file. The meta data is not parsed in any way by mk-ca-bundle.
file as comments and using this option makes those comments get passed on to
the output file. The meta data is not parsed in any way by mk-ca-bundle.
## -n