docs: reduce/avoid English contractions

You're => You are
Hasn't => Has not
Doesn't => Does not
Don't => Do not
You'll => You will
etc

Closes #7930
Daniel Stenberg 2021-10-31 16:34:44 +01:00
parent d3d079c138
commit a28464ae77
265 changed files with 926 additions and 925 deletions

README

@ -40,7 +40,7 @@ GIT
git clone https://github.com/curl/curl.git
(you'll get a directory named curl created, filled with the source code)
(you will get a directory named curl created, filled with the source code)
SECURITY PROBLEMS


@ -51,7 +51,7 @@ To download the very latest source from the Git server do this:
git clone https://github.com/curl/curl.git
(you'll get a directory named curl created, filled with the source code)
(you will get a directory named curl created, filled with the source code)
## Security problems


@ -35,7 +35,7 @@ Check out the current award amounts at [https://hackerone.com/curl](https://hack
# Who is eligible for a reward?
Everyone and anyone who reports a security problem in a released curl version
that hasn't already been reported can ask for a bounty.
that has not already been reported can ask for a bounty.
Vulnerabilities in features that are off by default and documented as
experimental are not eligible for a reward.


@ -12,7 +12,7 @@
## Where to report
If you can't fix a bug yourself and submit a fix for it, try to report an as
If you cannot fix a bug yourself and submit a fix for it, try to report an as
detailed report as possible to a curl mailing list to allow one of us to have
a go at a solution. You can optionally also submit your problem in [curl's
bug tracking system](https://github.com/curl/curl/issues).
@ -49,7 +49,7 @@
- your operating system's name and version number
- what version of curl you're using (`curl -V` is fine)
- what version of curl you are using (`curl -V` is fine)
- versions of the used libraries that libcurl is built to use
@ -67,7 +67,7 @@
If curl crashed, causing a core dump (in unix), there is hardly any use to
send that huge file to anyone of us. Unless we have an exact same system
setup as you, we can't do much with it. Instead, we ask you to get a stack
setup as you, we cannot do much with it. Instead, we ask you to get a stack
trace and send that (much smaller) output to us instead!
The address and how to subscribe to the mailing lists are detailed in the
@ -75,12 +75,12 @@
## libcurl problems
When you've written your own application with libcurl to perform transfers,
When you have written your own application with libcurl to perform transfers,
it is even more important to be specific and detailed when reporting bugs.
Tell us the libcurl version and your operating system. Tell us the name and
version of all relevant sub-components like for example the SSL library
you're using and what name resolving your libcurl uses. If you use SFTP or
you are using and what name resolving your libcurl uses. If you use SFTP or
SCP, the libssh2 version is relevant etc.
Showing us a real source code example repeating your problem is the best way
@ -104,7 +104,7 @@
But please do not assume that you can just lump over something to us and it
will then magically be fixed after some given time. Most often we need
feedback and help to understand what you've experienced and how to repeat a
feedback and help to understand what you have experienced and how to repeat a
problem. Then we may only be able to assist YOU to debug the problem and to
track down the proper fix.
@ -114,7 +114,7 @@
## How to get a stack trace
First, you must make sure that you compile all sources with `-g` and that you
don't 'strip' the final executable. Try to avoid optimizing the code as well,
do not 'strip' the final executable. Try to avoid optimizing the code as well,
remove `-O`, `-O2` etc from the compiler options.
Run the program until it cores.
@ -128,7 +128,7 @@
The list that is presented is the stack trace. If everything worked, it is
supposed to contain the chain of functions that were called when curl
crashed. Include the stack trace with your detailed bug report. It'll help a
crashed. Include the stack trace with your detailed bug report. It will help a
lot.
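A rough sketch of what collecting such a trace can look like with gcc and gdb; the build flags, binary path and core file name below are assumptions that vary between systems:
```
# build with debug symbols and without optimization
./configure CFLAGS="-g -O0"
make

# after the crash has left a core file, load it together with the binary
gdb ./src/curl core

# at the gdb prompt, print the stack trace to paste into the bug report
(gdb) bt
```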
## Bugs in libcurl bindings
@ -148,12 +148,12 @@
The developers in the curl project do not have bandwidth or energy enough to
maintain several branches or to spend much time on hunting down problems in
old versions when chances are we already fixed them or at least that they've
old versions when chances are we already fixed them or at least that they have
changed nature and appearance in later versions.
When you experience a problem and want to report it, you really SHOULD
include the version number of the curl you're using when you experience the
issue. If that version number shows us that you're using an out-of-date curl,
include the version number of the curl you are using when you experience the
issue. If that version number shows us that you are using an out-of-date curl,
you should also try out a modern curl version to see if the problem persists
or how/if it has changed in appearance.
@ -162,9 +162,9 @@
experimental build or similar, to get this confirmed or not.
At times people insist that they cannot upgrade to a modern curl version, but
instead they "just want the bug fixed". That's fine, just don't count on us
spending many cycles on trying to identify which single commit, if that's
even possible, that at some point in the past fixed the problem you're now
instead they "just want the bug fixed". That is fine, just do not count on us
spending many cycles on trying to identify which single commit, if that is
even possible, that at some point in the past fixed the problem you are now
experiencing.
Security wise, it is almost always a bad idea to lag behind the current curl
@ -178,7 +178,7 @@
When a new issue is posted in the issue tracker or on the mailing list, the
team of developers first need to see the report. Maybe they took the day off,
maybe they're off in the woods hunting. Have patience. Allow at least a few
maybe they are off in the woods hunting. Have patience. Allow at least a few
days before expecting someone to have responded.
In the issue tracker you can expect that some labels will be set on the issue
@ -186,7 +186,7 @@
## First response
If your issue/bug report wasn't perfect at once (and few are), chances are
If your issue/bug report was not perfect at once (and few are), chances are
that someone will ask follow-up questions. Which version did you use? Which
options did you use? How often does the problem occur? How can we reproduce
this problem? Which protocols does it involve? Or perhaps much more specific
@ -199,19 +199,19 @@
## Not reproducible
For problems that we can't reproduce and can't understand even after having
For problems that we cannot reproduce and cannot understand even after having
gotten all the info we need and having studied the source code over again,
are really hard to solve so then we may require further work from you who
actually see or experience the problem.
## Unresponsive
If the problem haven't been understood or reproduced, and there's nobody
If the problem has not been understood or reproduced, and there's nobody
responding to follow-up questions or questions asking for clarifications or
for discussing possible ways to move forward with the task, we take that as a
strong suggestion that the bug is not important.
Unimportant issues will be closed as inactive sooner or later as they can't
Unimportant issues will be closed as inactive sooner or later as they cannot
be fixed. The inactivity period (waiting for responses) should not be shorter
than two weeks but may extend months.
@ -219,7 +219,7 @@
Bugs that are filed and are understood can unfortunately end up in the
"nobody cares enough about it to work on it" category. Such bugs are
perfectly valid problems that *should* get fixed but apparently aren't. We
perfectly valid problems that *should* get fixed but apparently are not. We
try to mark such bugs as `KNOWN_BUGS material` after a time of inactivity and
if no activity is noticed after yet some time those bugs are added to the
`KNOWN_BUGS` document and are closed in the issue tracker.
@ -227,7 +227,7 @@
## `KNOWN_BUGS`
This is a list of known bugs. Bugs we know exist and that have been pointed
out but that haven't yet been fixed. The reasons for why they haven't been
out but that have not yet been fixed. The reasons for why they have not been
fixed can involve anything really, but the primary reason is that nobody has
considered these problems to be important enough to spend the necessary time
and effort to have them fixed.
@ -239,14 +239,14 @@
## `TODO`
Issues that are filed or reported that aren't really bugs but more missing
Issues that are filed or reported that are not really bugs but more missing
features or ideas for future improvements and so on are marked as
'enhancement' or 'feature-request' and will be added to the `TODO` document
and the issues are closed. We don't keep TODO items open in the issue
and the issues are closed. We do not keep TODO items open in the issue
tracker.
The `TODO` document is full of ideas and suggestions of what we can add or
fix one day. You're always encouraged and free to grab one of those items and
fix one day. You are always encouraged and free to grab one of those items and
take up a discussion with the curl development team on how that could be
implemented or provided in the project so that you can work on ticking it off
that document.
@ -258,7 +258,7 @@
The [issue and pull request trackers](https://github.com/curl/curl) only
holds "active" entries open (using a non-precise definition of what active
actually is, but they're at least not completely dead). Those that are
actually is, but they are at least not completely dead). Those that are
abandoned or in other ways dormant will be closed and sometimes added to
`TODO` and `KNOWN_BUGS` instead.


@ -58,7 +58,7 @@ warnings are:
- `COPYRIGHT`: the file is missing a copyright statement!
- `CPPCOMMENTS`: `//` comment detected, that's not C89 compliant
- `CPPCOMMENTS`: `//` comment detected, that is not C89 compliant
- `DOBRACE`: only use one space after do before open brace
@ -120,7 +120,7 @@ warnings are:
- `TYPEDEFSTRUCT`: we frown upon (most) typedefed structs
- `UNUSEDIGNORE`: a checksrc inlined warning ignore was asked for but not used,
that's an ignore that should be removed or changed to get used.
that is an ignore that should be removed or changed to get used.
### Extended warnings
@ -132,7 +132,7 @@ warning per line like so: `enable <EXTENDEDWARNING>`
Currently there is one extended warning which can be enabled:
- `COPYRIGHTYEAR`: the current changeset hasn't updated the copyright year in
- `COPYRIGHTYEAR`: the current changeset has not updated the copyright year in
the source file
## Ignore certain warnings
@ -159,11 +159,11 @@ This will ignore the warning for overly long lines until it is re-enabled with:
/* !checksrc! enable LONGLINE */
If the enabling isn't performed before the end of the file, it will be enabled
If the enabling is not performed before the end of the file, it will be enabled
automatically for the next file.
You can also opt to ignore just N violations so that if you have a single long
line you just can't shorten and is agreed to be fine anyway:
line you just cannot shorten and is agreed to be fine anyway:
/* !checksrc! disable LONGLINE 1 */
@ -174,7 +174,7 @@ instances are ignored and nothing extra.
### Directory wide ignore patterns
This is a method we've transitioned away from. Use inline ignores as far as
This is a method we have transitioned away from. Use inline ignores as far as
possible.
Make a `checksrc.skip` file in the directory of the source code with the


@ -43,10 +43,10 @@ as possible.
## Code style
Most code style nits are detected by checksrc but not all. Only leave remarks
on style deviation once checksrc doesn't find anymore.
on style deviation once checksrc does not find anymore.
Minor nits from fresh submitters can also be handled by the maintainer when
merging, in case it seems like the submitter isn't clear on what to do. We
merging, in case it seems like the submitter is not clear on what to do. We
want to make the process fun and exciting for new contributors.
## Encourage consistency
@ -105,15 +105,15 @@ updated documentation. Submitting that in a separate follow-up pull request is
not OK. A code review must also verify that the submitted documentation update
matches the code submission.
English isn't everyone's first language, be mindful of this and help the
English is not everyone's first language, be mindful of this and help the
submitter improve the text if it needs a rewrite to read better.
## Code shouldn't be hard to understand
## Code should not be hard to understand
Source code should be written to maximize readability and be easy to
understand.
## Functions shouldn't be large
## Functions should not be large
A single function should never be large as that makes it hard to follow and
understand all the exit points and state changes. Some existing functions in


@ -23,9 +23,9 @@ will cause warnings will not be accepted as-is.
## Naming
Try using a non-confusing naming scheme for your new functions and variable
names. It doesn't necessarily have to mean that you should use the same as in
names. It does not necessarily have to mean that you should use the same as in
other places of the code, just that the names should be logical,
understandable and be named according to what they're used for. File-local
understandable and be named according to what they are used for. File-local
functions should be made static. We like lower case names.
See the [INTERNALS](https://curl.se/dev/internals.html#symbols) document on
@ -46,7 +46,7 @@ if(something_is_true) {
## Comments
Since we write C89 code, **//** comments are not allowed. They weren't
Since we write C89 code, **//** comments are not allowed. They were not
introduced in the C standard until C99. We use only __/* comments */__.
```c
@ -230,7 +230,7 @@ if(Curl_pipeline_wanted(handle->multi, CURLPIPE_HTTP1) &&
(handle->set.httpversion != CURL_HTTP_VERSION_1_0) &&
(handle->set.httpreq == HTTPREQ_GET ||
handle->set.httpreq == HTTPREQ_HEAD))
/* didn't ask for HTTP/1.0 and a GET or HEAD */
/* did not ask for HTTP/1.0 and a GET or HEAD */
return TRUE;
```


@ -19,7 +19,7 @@ Before posting to one of the curl mailing lists, please read up on the
We also hang out on IRC in #curl on libera.chat
If you're at all interested in the code side of things, consider clicking
If you are at all interested in the code side of things, consider clicking
'watch' on the [curl repo on GitHub](https://github.com/curl/curl) to be
notified of pull requests and new issues posted there.
@ -30,9 +30,9 @@ the same license curl and libcurl is already using unless stated and agreed
otherwise.
If you add a larger piece of code, you can opt to make that file or set of
files to use a different license as long as they don't enforce any changes to
files to use a different license as long as they do not enforce any changes to
the rest of the package and they make sense. Such "separate parts" can not be
GPL licensed (as we don't want copyleft to affect users of libcurl) but they
GPL licensed (as we do not want copyleft to affect users of libcurl) but they
must use "GPL compatible" licenses (as we want to allow users to use libcurl
properly in GPL licensed environments).
@ -65,12 +65,12 @@ When writing C code, follow the
[CODE_STYLE](https://curl.se/dev/code-style.html) already established in
the project. Consistent style makes code easier to read and mistakes less
likely to happen. Run `make checksrc` before you submit anything, to make sure
you follow the basic style. That script doesn't verify everything, but if it
you follow the basic style. That script does not verify everything, but if it
complains you know you have work to do.
### Non-clobbering All Over
When you write new functionality or fix bugs, it is important that you don't
When you write new functionality or fix bugs, it is important that you do not
fiddle all over the source files and functions. Remember that it is likely
that other people have done changes in the same source files as you have and
possibly even in the same functions. If you bring completely new
@ -80,7 +80,7 @@ fix one bug at a time and send them as separate patches.
### Write Separate Changes
It is annoying when you get a huge patch from someone that is said to fix 511
odd problems, but discussions and opinions don't agree with 510 of them - or
odd problems, but discussions and opinions do not agree with 510 of them - or
509 of them were already fixed in a different way. Then the person merging
this change needs to extract the single interesting patch from somewhere
within the huge pile of source, and that creates a lot of extra work.
@ -114,13 +114,13 @@ generated from the nroff/ASCII versions.
### Test Cases
Since the introduction of the test suite, we can quickly verify that the main
features are working as they're supposed to. To maintain this situation and
features are working as they are supposed to. To maintain this situation and
improve it, all new features and functions that are added need to be tested
in the test suite. Every feature that is added should get at least one valid
test case that verifies that it works as documented. If every submitter also
posts a few test cases, it won't end up as a heavy burden on a single person!
posts a few test cases, it will not end up as a heavy burden on a single person!
If you don't have test cases or perhaps you have done something that is hard
If you do not have test cases or perhaps you have done something that is hard
to write tests for, do explain exactly how you have otherwise tested and
verified your changes.
@ -140,7 +140,7 @@ submitter of a change, you are the owner of that change until it has been merged
Respond on the list or on github about the change and answer questions and/or
fix nits/flaws. This is important. We will take lack of replies as a sign that
you're not anxious to get your patch accepted and we tend to simply drop such
you are not anxious to get your patch accepted and we tend to simply drop such
changes.
### About pull requests
@ -165,10 +165,10 @@ ways. Every pull request is verified for each of the following:
- ... the test suite still runs 100% fine
- ... the release tarball (the "dist") still works
- ... it builds fine in-tree as well as out-of-tree
- ... code coverage doesn't shrink drastically
- ... code coverage does not shrink drastically
If the pull-request fails one of these tests, it will show up as a red X and
you are expected to fix the problem. If you don't understand when the issue is
you are expected to fix the problem. If you do not understand what the issue is
or have other problems to fix the complaint, just ask and other project
members will likely be able to help out.
@ -205,7 +205,7 @@ commits so that we can review the full updated version more easily.
Make the patch against as recent source versions as possible.
If you've followed the tips in this document and your patch still hasn't been
If you have followed the tips in this document and your patch still has not been
incorporated or responded to after some weeks, consider resubmitting it to the
list or better yet: change it to a pull request.
@ -229,24 +229,24 @@ A short guide to how to write commit messages in the curl project.
The first line is a succinct description of the change:
- use the imperative, present tense: "change" not "changed" nor "changes"
- don't capitalize first letter
- do not capitalize first letter
- no dot (.) at the end
The `[area]` in the first line can be `http2`, `cookies`, `openssl` or
similar. There's no fixed list to select from but using the same "area" as
other related changes could make sense.
Don't forget to use commit --author="" if you commit someone else's work, and
Do not forget to use commit --author="" if you commit someone else's work, and
make sure that you have your own user and email setup correctly in git before
you commit
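A minimal sketch with a made-up identity:
```
# credit the real author when committing work written by someone else
git commit --author="Jane Doe <jane@example.com>"

# and make sure your own committer identity is configured
git config user.name "Your Name"
git config user.email "you@example.com"
```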
### Write Access to git Repository
If you are a frequent contributor, you may be given push access to the git
repository and then you'll be able to push your changes straight into the git
repository and then you will be able to push your changes straight into the git
repo instead of sending changes as pull requests or by mail as patches.
Just ask if this is what you'd want. You will be required to have posted
Just ask if this is what you would want. You will be required to have posted
several high quality patches first, before you can be granted push access.
### How To Make a Patch with git
@ -263,7 +263,7 @@ local repository:
As usual, group your commits so that you commit all changes at once that
constitute a logical change.
Once you have done all your commits and you're happy with what you see, you
Once you have done all your commits and you are happy with what you see, you
can make patches out of your changes that are suitable for mailing:
git format-patch remotes/origin/master


@ -4,7 +4,7 @@ If any of these deprecated features is a cause for concern for you, please
email the
[curl-library mailing list](https://lists.haxx.se/listinfo/curl-library)
as soon as possible and explain to us why this is a problem for you and
how your use case can't be satisfied properly using a workaround.
how your use case cannot be satisfied properly using a workaround.
## Past removals


@ -15,7 +15,7 @@ without using the dedicated dynbuf API.
void Curl_dyn_init(struct dynbuf *s, size_t toobig);
```
This inits a struct to use for dynbuf and it can't fail. The `toobig` value
This inits a struct to use for dynbuf and it cannot fail. The `toobig` value
**must** be set to the maximum size we allow this buffer instance to grow to.
The functions below will return `CURLE_OUT_OF_MEMORY` when hitting this limit.

docs/FAQ

@ -17,7 +17,7 @@ FAQ
1.8 I have a problem who do I mail?
1.9 Where do I buy commercial support for curl?
1.10 How many are using curl?
1.11 Why don't you update ca-bundle.crt
1.11 Why do you not update ca-bundle.crt
1.12 I have a problem who can I chat with?
1.13 curl's ECCN number?
1.14 How do I submit my patch?
@ -31,7 +31,7 @@ FAQ
3. Usage Problems
3.1 curl: (1) SSL is disabled, https: not supported
3.2 How do I tell curl to resume a transfer?
3.3 Why doesn't my posting using -F work?
3.3 Why does my posting using -F not work?
3.4 How do I tell curl to run custom FTP commands?
3.5 How can I disable the Accept: */* header?
3.6 Does curl support ASP, XML, XHTML or HTML version Y?
@ -55,7 +55,7 @@ FAQ
4. Running Problems
4.2 Why do I get problems when I use & or % in the URL?
4.3 How can I use {, }, [ or ] to specify multiple URLs?
4.4 Why do I get downloaded data even though the web page doesn't exist?
4.4 Why do I get downloaded data even though the web page does not exist?
4.5 Why do I get return code XXX from a HTTP server?
4.5.1 "400 Bad Request"
4.5.2 "401 Unauthorized"
@ -66,18 +66,18 @@ FAQ
4.6 Can you tell me what error code 142 means?
4.7 How do I keep user names and passwords secret in curl command lines?
4.8 I found a bug!
4.9 curl can't authenticate to the server that requires NTLM?
4.10 My HTTP request using HEAD, PUT or DELETE doesn't work!
4.9 curl cannot authenticate to the server that requires NTLM?
4.10 My HTTP request using HEAD, PUT or DELETE does not work!
4.11 Why do my HTTP range requests return the full document?
4.12 Why do I get "certificate verify failed" ?
4.13 Why is curl -R on Windows one hour off?
4.14 Redirects work in browser but not with curl!
4.15 FTPS doesn't work
4.15 FTPS does not work
4.16 My HTTP POST or PUT requests are slow!
4.17 Non-functional connect timeouts on Windows
4.18 file:// URLs containing drive letters (Windows, NetWare)
4.19 Why doesn't curl return an error when the network cable is unplugged?
4.20 curl doesn't return error for HTTP non-200 responses!
4.19 Why does curl not return an error when the network cable is unplugged?
4.20 curl does not return error for HTTP non-200 responses!
5. libcurl Issues
5.1 Is libcurl thread-safe?
@ -248,7 +248,7 @@ FAQ
Project cURL is entirely free and open. We do this voluntarily, mostly in
our spare time. Companies may pay individual developers to work on curl,
but that's up to each company and developer. This is not controlled by nor
but that is up to each company and developer. This is not controlled by nor
supervised in any way by the curl project.
We get help from companies. Haxx provides website, bandwidth, mailing lists
@ -313,22 +313,22 @@ FAQ
It is impossible to tell.
We don't know how many users that knowingly have installed and use curl.
We do not know how many users that knowingly have installed and use curl.
We don't know how many users that use curl without knowing that they are in
We do not know how many users that use curl without knowing that they are in
fact using it.
We don't know how many users that downloaded or installed curl and then
We do not know how many users that downloaded or installed curl and then
never use it.
In 2020, we estimate that curl runs in roughly ten billion installations
world wide.
1.11 Why don't you update ca-bundle.crt
1.11 Why do you not update ca-bundle.crt
In the cURL project we've decided not to attempt to keep this file updated
In the cURL project we have decided not to attempt to keep this file updated
(or even present) since deciding what to add to a ca cert bundle is an
undertaking we've not been ready to accept, and the one we can get from
undertaking we have not been ready to accept, and the one we can get from
Mozilla is perfectly fine so there's no need to duplicate that work.
Today, with many services performed over HTTPS, every operating system
@ -344,7 +344,7 @@ FAQ
1.12 I have a problem who can I chat with?
There's a bunch of friendly people hanging out in the #curl channel on the
IRC network libera.chat. If you're polite and nice, chances are good that
IRC network libera.chat. If you are polite and nice, chances are good that
you can get -- or provide -- help instantly.
1.13 curl's ECCN number?
@ -372,10 +372,10 @@ FAQ
1.14 How do I submit my patch?
We strongly encourage you to submit changes and improvements directly as
"pull requests" on github: https://github.com/curl/curl/pulls
"pull requests" on GitHub: https://github.com/curl/curl/pulls
If you for any reason can't or won't deal with github, send your patch to
the curl-library mailing list. We're many subscribers there and there are
If you for any reason cannot or will not deal with GitHub, send your patch to
the curl-library mailing list. We are many subscribers there and there are
lots of people who can review patches, comment on them and "receive" them
properly.
@ -405,7 +405,7 @@ FAQ
configure checks for.
The reason why static libraries is much harder to deal with is that for them
we don't get any help but the script itself must know or check what more
we do not get any help but the script itself must know or check what more
libraries that are needed (with shared libraries, that dependency "chain" is
handled automatically). This is an error-prone process and one that also
tends to vary over time depending on the release versions of the involved
@ -437,20 +437,20 @@ FAQ
3.1 curl: (1) SSL is disabled, https: not supported
If you get this output when trying to get anything from a https:// server,
it means that the instance of curl/libcurl that you're using was built
it means that the instance of curl/libcurl that you are using was built
without support for this protocol.
This could've happened if the configure script that was run at build time
couldn't find all libs and include files curl requires for SSL to work. If
This could have happened if the configure script that was run at build time
could not find all libs and include files curl requires for SSL to work. If
the configure script fails to find them, curl is simply built without SSL
support.
To get the https:// support into a curl that was previously built but that
reports that https:// is not supported, you should dig through the document
and logs and check out why the configure script doesn't find the SSL libs
and logs and check out why the configure script does not find the SSL libs
and/or include files.
Also, check out the other paragraph in this FAQ labeled "configure doesn't
Also, check out the other paragraph in this FAQ labeled "configure does not
find OpenSSL even when it is installed".
3.2 How do I tell curl to resume a transfer?
@ -458,19 +458,19 @@ FAQ
curl supports resumed transfers both ways on both FTP and HTTP.
Try the -C option.
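A short sketch of resuming a download (the URL is a placeholder):
```
# let curl work out the resume offset from the partially downloaded local file
curl -C - -O https://example.com/big-archive.tar.gz

# or resume from an explicit byte offset
curl -C 4000 -O https://example.com/big-archive.tar.gz
```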
3.3 Why doesn't my posting using -F work?
3.3 Why does my posting using -F not work?
You can't arbitrarily use -F or -d, the choice between -F or -d depends on
You cannot arbitrarily use -F or -d, the choice between -F or -d depends on
the HTTP operation you need curl to do and what the web server that will
receive your post expects.
If the form you're trying to submit uses the type 'multipart/form-data',
If the form you are trying to submit uses the type 'multipart/form-data',
then and only then you must use the -F type. In all the most common cases,
you should use -d which then causes a posting with the type
'application/x-www-form-urlencoded'.
This is described in some detail in the MANUAL and TheArtOfHttpScripting
documents, and if you don't understand it the first time, read it again
documents, and if you do not understand it the first time, read it again
before you post questions about this to the mailing list. Also, try reading
through the mailing list archives for old postings and questions regarding
this.
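A minimal sketch of the two styles, with placeholder URLs and field names:
```
# regular POST, sent as application/x-www-form-urlencoded
curl -d 'name=daniel&tool=curl' https://example.com/form.cgi

# multipart formpost, only for forms that expect multipart/form-data
curl -F name=daniel -F upload=@localfile.txt https://example.com/upload.cgi
```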
@ -480,7 +480,7 @@ FAQ
You can tell curl to perform optional commands both before and/or after a
file transfer. Study the -Q/--quote option.
Since curl is used for file transfers, you don't normally use curl to
Since curl is used for file transfers, you do not normally use curl to
perform FTP commands without transferring anything. Therefore you must
always specify a URL to transfer to/from even when doing custom FTP
commands, or use -I which implies the "no body" option sent to libcurl.
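A sketch of the idea, with a made-up server and file name:
```
# fetch a file, then send a custom DELE command once the transfer has completed
# (a leading '-' makes the quoted command run after the transfer, not before it)
curl -O ftp://ftp.example.com/file.txt -Q "-DELE file.txt"
```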
@ -493,9 +493,9 @@ FAQ
3.6 Does curl support ASP, XML, XHTML or HTML version Y?
To curl, all contents are alike. It doesn't matter how the page was
To curl, all contents are alike. It does not matter how the page was
generated. It may be ASP, PHP, Perl, shell-script, SSI or plain HTML
files. There's no difference to curl and it doesn't even know what kind of
files. There's no difference to curl and it does not even know what kind of
language that generated the page.
See also item 3.14 regarding javascript.
@ -515,7 +515,7 @@ FAQ
3.8 How do I tell curl to follow HTTP redirects?
curl does not follow so-called redirects by default. The Location: header
that informs the client about this is only interpreted if you're using the
that informs the client about this is only interpreted if you are using the
-L/--location option. As in:
curl -L http://redirector.com
@ -534,7 +534,7 @@ FAQ
All the various bindings to libcurl are made by other projects and people,
outside of the cURL project. The cURL project itself only produces libcurl
with its plain C API. If you don't find anywhere else to ask you can ask
with its plain C API. If you do not find anywhere else to ask you can ask
about bindings on the curl-library list too, but be prepared that people on
that list may not know anything about bindings.
@ -554,7 +554,7 @@ FAQ
XML-RPC are all such ones. You can use -X to set custom requests and -H to
set custom headers (or replace internally generated ones).
Using libcurl is of course just as good and you'd just use the proper
Using libcurl is of course just as good and you would just use the proper
library options to do the same.
3.11 How do I POST with a different Content-Type?
@ -568,7 +568,7 @@ FAQ
Because when you use a HTTP proxy, the protocol spoken on the network will
be HTTP, even if you specify a FTP URL. This effectively means that you
normally can't use FTP-specific features such as FTP upload and FTP quote
normally cannot use FTP-specific features such as FTP upload and FTP quote
etc.
There is one exception to this rule, and that is if you can "tunnel through"
@ -590,7 +590,7 @@ FAQ
Exactly what kind of quotes and how to do this is entirely up to the shell
or command line interpreter that you are using. For most unix shells, you
can more or less pick either single (') or double (") quotes. For
Windows/DOS prompts I believe you're forced to use double (") quotes.
Windows/DOS prompts I believe you are forced to use double (") quotes.
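A small sketch of the same placeholder URL quoted for a unix shell and for a Windows prompt:
```
# unix shells: single or double quotes both keep & and ? away from the shell
curl 'https://example.com/search?tool=curl&limit=10'

# Windows command prompts: use double quotes
curl "https://example.com/search?tool=curl&limit=10"
```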
Please study the documentation for your particular environment. Examples in
the curl docs will use a mix of both of these as shown above. You must
@ -608,8 +608,8 @@ FAQ
.pac files are a netscape invention and are sometimes used by organizations
to allow them to differentiate which proxies to use. The .pac contents is
just a Javascript program that gets invoked by the browser and that returns
the name of the proxy to connect to. Since curl doesn't support Javascript,
it can't support .pac proxy configuration either.
the name of the proxy to connect to. Since curl does not support Javascript,
it cannot support .pac proxy configuration either.
Some workarounds usually suggested to overcome this Javascript dependency:
@ -641,7 +641,7 @@ FAQ
The server you communicate with may require that you can provide this in
order to prove that you actually are who you claim to be. If the server
doesn't require this, you don't need a client certificate.
does not require this, you do not need a client certificate.
A client certificate is always used together with a private key, and the
private key has a pass phrase that protects it.
@ -690,7 +690,7 @@ FAQ
3.19 How do I get HTTP from a host using a specific IP address?
For example, you may be trying out a website installation that isn't yet in
For example, you may be trying out a website installation that is not yet in
the DNS. Or you have a site using multiple IP addresses for a given host
name and you want to address a specific one out of the set.
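One way to do that is the --resolve option; the host name and address below are assumptions:
```
# map www.example.com:443 to a specific address for this invocation only
curl --resolve www.example.com:443:203.0.113.7 https://www.example.com/

# older alternative: talk to the address directly and supply the Host: header yourself
curl -H "Host: www.example.com" http://203.0.113.7/
```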
@ -708,7 +708,7 @@ FAQ
3.20 How to SFTP from my user's home directory?
Contrary to how FTP works, SFTP and SCP URLs specify the exact directory to
work with. It means that if you don't specify that you want the user's home
work with. It means that if you do not specify that you want the user's home
directory, you get the actual root directory.
To specify a file in your user's home directory, you need to use the correct
@ -724,7 +724,7 @@ FAQ
When passing on a URL to curl to use, it may respond that the particular
protocol is not supported or disabled. The particular way this error message
is phrased is because curl doesn't make a distinction internally of whether
is phrased is because curl does not make a distinction internally of whether
a particular protocol is not supported (i.e. never got any code added that
knows how to speak that protocol) or if it was explicitly disabled. curl can
be built to only support a given set of protocols, and the rest would then
@ -743,7 +743,7 @@ FAQ
"curl http://example.com" it will use GET. If you use -d or -F curl will use
POST, -I will cause a HEAD and -T will make it a PUT.
If for whatever reason you're not happy with these default choices that curl
If for whatever reason you are not happy with these default choices that curl
does for you, you can override those request methods by specifying -X
[WHATEVER]. This way you can for example send a DELETE by doing "curl -X
DELETE [URL]".
@ -754,7 +754,7 @@ FAQ
request-body in a GET request with something like "curl -X GET -d data
[URL]"
Note that -X doesn't actually change curl's behavior as it only modifies the
Note that -X does not actually change curl's behavior as it only modifies the
actual string sent in the request, but that may of course trigger a
different set of events.
@ -799,15 +799,15 @@ FAQ
curl -g 'www.site.com/weirdname[].html'
4.4 Why do I get downloaded data even though the web page doesn't exist?
4.4 Why do I get downloaded data even though the web page does not exist?
curl asks remote servers for the page you specify. If the page doesn't exist
curl asks remote servers for the page you specify. If the page does not exist
at the server, the HTTP protocol defines how the server should respond and
that means that headers and a "page" will be returned. That's simply how
that means that headers and a "page" will be returned. That is simply how
HTTP works.
By using the --fail option you can tell curl explicitly to not get any data
if the HTTP return code doesn't say success.
if the HTTP return code does not say success.
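For example (placeholder URL):
```
# exit with an error (code 22) instead of saving the server's error page
curl --fail -O https://example.com/maybe-missing.html
```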
4.5 Why do I get return code XXX from a HTTP server?
@ -865,11 +865,11 @@ FAQ
This problem has two sides:
The first part is to avoid having clear-text passwords in the command line
so that they don't appear in 'ps' outputs and similar. That is easily
so that they do not appear in 'ps' outputs and similar. That is easily
avoided by using the "-K" option to tell curl to read parameters from a file
or stdin to which you can pass the secret info. curl itself will also
attempt to "hide" the given password by blanking out the option - this
doesn't work on all platforms.
does not work on all platforms.
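A sketch of that approach, with made-up credentials and URL:
```
# keep the credentials in a config file instead of on the command line
printf 'user = "daniel:secret"\n' > my.conf
curl -K my.conf https://example.com/protected/

# or feed the config snippet on stdin
printf 'user = "daniel:secret"\n' | curl -K - https://example.com/protected/
```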
To keep the passwords in your account secret from the rest of the world is
not a task that curl addresses. You could of course encrypt them somehow to
@ -887,14 +887,14 @@ FAQ
It is not a bug if the behavior is documented. Read the docs first.
Especially check out the KNOWN_BUGS file, it may be a documented bug!
If it is a problem with a binary you've downloaded or a package for your
If it is a problem with a binary you have downloaded or a package for your
particular platform, try contacting the person who built the package/archive
you have.
If there is a bug, read the BUGS document first. Then report it as described
in there.
4.9 curl can't authenticate to the server that requires NTLM?
4.9 curl cannot authenticate to the server that requires NTLM?
NTLM support requires OpenSSL, GnuTLS, mbedTLS, NSS, Secure Transport, or
Microsoft Windows libraries at build-time to provide this functionality.
@ -902,7 +902,7 @@ FAQ
NTLM is a Microsoft proprietary protocol. Proprietary formats are evil. You
should not use such ones.
4.10 My HTTP request using HEAD, PUT or DELETE doesn't work!
4.10 My HTTP request using HEAD, PUT or DELETE does not work!
Many web servers allow or demand that the administrator configures the
server properly for these requests to work on the web server.
@ -910,7 +910,7 @@ FAQ
Some servers seem to support HEAD only on certain kinds of URLs.
To fully grasp this, try the documentation for the particular server
software you're trying to interact with. This is not anything curl can do
software you are trying to interact with. This is not anything curl can do
anything about.
4.11 Why do my HTTP range requests return the full document?
@ -921,7 +921,7 @@ FAQ
4.12 Why do I get "certificate verify failed" ?
When you invoke curl and get an error 60 error back it means that curl
couldn't verify that the server's certificate was good. curl verifies the
could not verify that the server's certificate was good. curl verifies the
certificate using the CA cert bundle and verifying for which names the
certificate has been granted.
@ -938,7 +938,7 @@ FAQ
At times, you find that the verification works in your favorite browser but
fails in curl. When this happens, the reason is usually that the server
sends an incomplete cert chain. The server is mandated to send all
"intermediate certificates" but doesn't. This typically works with browsers
"intermediate certificates" but does not. This typically works with browsers
anyway since they A) cache such certs and B) supports AIA which downloads
such missing certificates on demand. This is a server misconfiguration. A
good way to figure out if this is the case it to use the SSL Labs server
@ -971,7 +971,7 @@ FAQ
manually figure out what the page is set to do, or write a script that parses
the results and fetches the new URL.
4.15 FTPS doesn't work
4.15 FTPS does not work
curl supports FTPS (sometimes known as FTP-SSL) both implicit and explicit
mode.
@ -993,8 +993,8 @@ FAQ
before having to send any data. This is useful in authentication
cases and others.
However, many servers don't implement the Expect: stuff properly and if the
server doesn't respond (positively) within 1 second libcurl will continue
However, many servers do not implement the Expect: stuff properly and if the
server does not respond (positively) within 1 second libcurl will continue
and send off the data anyway.
You can disable libcurl's use of the Expect: header the same way you disable
@ -1014,7 +1014,7 @@ FAQ
Also, even on non-Windows systems there may run a firewall or anti-virus
software or similar that accepts the connection but does not actually do
anything else. This will make (lib)curl to consider the connection connected
and thus the connect timeout won't trigger.
and thus the connect timeout will not trigger.
4.18 file:// URLs containing drive letters (Windows, NetWare)
@ -1023,7 +1023,7 @@ FAQ
file://D:/blah.txt
You'll find that even if D:\blah.txt does exist, curl returns a 'file
You will find that even if D:\blah.txt does exist, curl returns a 'file
not found' error.
According to RFC 1738 (https://www.ietf.org/rfc/rfc1738.txt),
@ -1031,7 +1031,7 @@ FAQ
most implementations. In the above example, 'D:' is treated as the
host component, and is taken away. Thus, curl tries to open '/blah.txt'.
If your system is installed to drive C:, that will resolve to 'C:\blah.txt',
and if that doesn't exist you will get the not found error.
and if that does not exist you will get the not found error.
To fix this problem, use file:// URLs with *three* leading slashes:
@ -1044,11 +1044,11 @@ FAQ
In either case, curl should now be looking for the correct file.
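That is, forms along these lines, reusing the drive letter and file name from the example above:
```
# three leading slashes: empty host component, absolute path with the drive letter
curl file:///D:/blah.txt

# naming the host explicitly as localhost also works
curl file://localhost/D:/blah.txt
```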
4.19 Why doesn't curl return an error when the network cable is unplugged?
4.19 Why does curl not return an error when the network cable is unplugged?
Unplugging a cable is not an error situation. The TCP/IP protocol stack
was designed to be fault tolerant, so even though there may be a physical
break somewhere the connection shouldn't be affected, just possibly
break somewhere the connection should not be affected, just possibly
delayed. Eventually, the physical break will be fixed or the data will be
re-routed around the physical problem through another path.
@ -1061,9 +1061,9 @@ FAQ
connection to make sure it is still available to send data. That should
reliably detect any TCP/IP network failure.
But even that won't detect the network going down before the TCP/IP
But even that will not detect the network going down before the TCP/IP
connection is established (e.g. during a DNS lookup) or using protocols that
don't use TCP. To handle those situations, curl offers a number of timeouts
do not use TCP. To handle those situations, curl offers a number of timeouts
on its own. --speed-limit/--speed-time will abort if the data transfer rate
falls too low, and --connect-timeout and --max-time can be used to put an
overall timeout on the connection phase or the entire transfer.
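A sketch combining those options; the values and URL are arbitrary:
```
# give up if connecting takes over 10 seconds or the whole transfer over 5 minutes
curl --connect-timeout 10 --max-time 300 -O https://example.com/big.iso

# abort if the transfer rate stays below 1000 bytes/second for 30 seconds
curl --speed-limit 1000 --speed-time 30 -O https://example.com/big.iso
```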
@ -1074,11 +1074,11 @@ FAQ
by having the application monitor the network connection on its own using an
OS-specific mechanism, then signaling libcurl to abort (see also item 5.13).
4.20 curl doesn't return error for HTTP non-200 responses!
4.20 curl does not return error for HTTP non-200 responses!
Correct. Unless you use -f (--fail).
When doing HTTP transfers, curl will perform exactly what you're asking it
When doing HTTP transfers, curl will perform exactly what you are asking it
to do and if successful it will not return an error. You can use curl to
test your web server's "file not found" page (that gets 404 back), you can
use it to check your authentication protected web pages (that gets a 401
@ -1087,7 +1087,7 @@ FAQ
The specific HTTP response code does not constitute a problem or error for
curl. It simply sends and delivers HTTP as you asked and if that worked,
everything is fine and dandy. The response code is generally providing more
higher level error information that curl doesn't care about. The error was
higher level error information that curl does not care about. The error was
not in the HTTP transfer.
If you want your command line to treat error codes in the 400 and up range
@ -1153,7 +1153,7 @@ FAQ
libcurl has excellent support for transferring multiple files. You should
just repeatedly set new URLs with curl_easy_setopt() and then transfer it
with curl_easy_perform(). The handle you get from curl_easy_init() is not
only reusable, but you're even encouraged to reuse it if you can, as that
only reusable, but you are even encouraged to reuse it if you can, as that
will enable libcurl to use persistent connections.
5.4 Does libcurl do Winsock initialization on win32 systems?
@ -1195,12 +1195,12 @@ FAQ
When building an application that uses the static libcurl library, you must
add -DCURL_STATICLIB to your CFLAGS. Otherwise the linker will look for
dynamic import symbols. If you're using Visual Studio, you need to instead
dynamic import symbols. If you are using Visual Studio, you need to instead
add CURL_STATICLIB in the "Preprocessor Definitions" section.
If you get linker error like "unknown symbol __imp__curl_easy_init ..." you
have linked against the wrong (static) library. If you want to use the
libcurl.dll and import lib, you don't need any extra CFLAGS, but use one of
libcurl.dll and import lib, you do not need any extra CFLAGS, but use one of
the import libraries below. These are the libraries produced by the various
lib/Makefile.* files:
@ -1214,7 +1214,7 @@ FAQ
5.8 libcurl.so.X: open failed: No such file or directory
This is an error message you might get when you try to run a program linked
with a shared version of libcurl and your run-time linker (ld.so) couldn't
with a shared version of libcurl and your run-time linker (ld.so) could not
find the shared library named libcurl.so.X. (Where X is the number of the
current libcurl ABI, typically 3 or 4).
@ -1228,7 +1228,7 @@ FAQ
* Set an environment variable (LD_LIBRARY_PATH for example) where ld.so
should check for libs
* Adjust the system's config to check for libs in the directory where you've
* Adjust the system's config to check for libs in the directory where you have
put the dir (like Linux's /etc/ld.so.conf)
'man ld.so' and 'man ld' will tell you more details
@ -1297,13 +1297,13 @@ FAQ
can do this with include the progress callback, the read callback and the
write callback.
If you're using the multi interface, you can also stop a transfer by
If you are using the multi interface, you can also stop a transfer by
removing the particular easy handle from the multi stack at any moment you
think the transfer is done or when you wish to abort the transfer.
5.14 Using C++ non-static functions for callbacks?
libcurl is a C library, it doesn't know anything about C++ member functions.
libcurl is a C library, it does not know anything about C++ member functions.
You can overcome this "limitation" with relative ease using a static
member function that is passed a pointer to the class:
@ -1331,9 +1331,9 @@ FAQ
a symlink etc. If the FTP server supports the MLSD command then it will
return data in a machine-readable format that can be parsed for type. The
types are specified by RFC3659 section 7.5.1. If MLSD is not supported then
you have to work with what you're given. The LIST output format is entirely
at the server's own liking and the NLST output doesn't reveal any types and
in many cases doesn't even include all the directory entries. Also, both LIST
you have to work with what you are given. The LIST output format is entirely
at the server's own liking and the NLST output does not reveal any types and
in many cases does not even include all the directory entries. Also, both LIST
and NLST tend to hide unix-style hidden files (those that start with a dot)
by default so you need to do "LIST -a" or similar to see them.
@ -1386,7 +1386,7 @@ FAQ
but still in the same single thread.
libcurl will potentially internally use threads for name resolving, if it
was built to work like that, but in those cases it'll create the child
was built to work like that, but in those cases it will create the child
threads by itself and they will only be used and then killed internally by
libcurl and never exposed to the outside.
@ -1426,7 +1426,7 @@ FAQ
Yes!
The LGPL license doesn't clash with other licenses.
The LGPL license does not clash with other licenses.
6.5 Can I modify curl/libcurl for my program and keep the changes secret?
@ -1507,18 +1507,18 @@ FAQ
8.1 Why does curl use C89?
As with everything in curl, there's a history and we keep using what we've
As with everything in curl, there's a history and we keep using what we have
used before until someone brings up the subject and argues for and works on
changing it.
We started out using C89 in the 1990s because that was the only way to write
a truly portable C program and have it run as widely as possible. C89 was for
a long time even necessary to make things work on otherwise considered modern
platforms such as Windows. Today, we don't really know how many users that
platforms such as Windows. Today, we do not really know how many users that
still require the use of a C89 compiler.
We will continue to use C89 for as long as nobody brings up a strong enough
reason for us to change our minds. The core developers of the project don't
reason for us to change our minds. The core developers of the project do not
feel restricted by this and we are not convinced that going C99 will offer us
enough of a benefit to warrant the risk of cutting off a share of users.


@ -98,7 +98,7 @@ on maintaining curl is considered a hero, for all time hereafter.
## Security team members
We have a security team. That's the team of people who are subscribed to the
We have a security team. That is the team of people who are subscribed to the
curl-security mailing list; the receivers of security reports from users and
developers. This list of people will vary over time but should be skilled
developers familiar with the curl project.
@ -123,7 +123,7 @@ primary curl contact with Fastly.
## BDFL
That's Daniel.
That is Daniel.
# Maintainers
@ -152,10 +152,10 @@ within the area of personal expertise and experience.
### Merge advice
When you're merging patches/PRs...
When you are merging patches/PRs...
- make sure the commit messages follow our template
- squash patch sets into a few logical commits even if the PR didn't, if
- squash patch sets into a few logical commits even if the PR did not, if
necessary
- avoid the "merge" button on GitHub, do it "manually" instead to get full
control and full audit trail (github leaves out you as "Committer:")

View File

@ -18,10 +18,10 @@ down and report the bug. Or make your first pull request with a fix for that.
## Smaller tasks
Some projects mark small issues as "beginner friendly", "bite-sized" or
similar. We don't do that in curl since such issues never linger around long
similar. We do not do that in curl since such issues never linger around long
enough. Simple issues get handled fast.
If you're looking for a smaller or simpler task in the project to help out
If you are looking for a smaller or simpler task in the project to help out
with as an entry-point into the project, perhaps because you are a newcomer or
even maybe not a terribly experienced developer, here's our advice:
@ -43,7 +43,7 @@ one that piques your interest.
## Work on known bugs
Some bugs are known and haven't yet received attention and work enough to get
Some bugs are known and have not yet received attention and work enough to get
fixed. We collect such known existing flaws in the
[KNOWN_BUGS](https://curl.se/docs/knownbugs.html) page. Many of them link
to the original bug report with some additional details, but some may also
@ -56,7 +56,7 @@ On the [autobuilds page](https://curl.se/dev/builds.html) we show a
collection of test results from the automatic curl build and tests that are
performed by volunteers. Fixing compiler warnings and errors shown there is
something we value greatly. Also, if you own or run systems or architectures
that aren't already tested in the autobuilds, we also appreciate more
that are not already tested in the autobuilds, we also appreciate more
volunteers running builds automatically to help us keep curl portable.
## TODO items
@ -64,7 +64,7 @@ volunteers running builds automatically to help us keep curl portable.
Ideas for features and functions that we have considered worthwhile to
implement and provide are kept in the
[TODO](https://curl.se/docs/todo.html) file. Some of the ideas are
rough. Some are well thought out. Some probably aren't really suitable
rough. Some are well thought out. Some probably are not really suitable
anymore.
Before you invest a lot of time on a TODO item, do bring it up for discussion
@ -83,5 +83,5 @@ the specific implementation. Either way is fine.
We offer [guidelines](https://curl.se/dev/contribute.html) that are
suitable to be familiar with before you decide to contribute to curl. If
you're used to open source development, you'll probably not find many
you are used to open source development, you will probably not find many
surprises in there.

View File

@ -60,7 +60,7 @@ SSL support was added, powered by the SSLeay library.
August: first announcement of curl on freshmeat.net.
October: with the curl 4.9 release and the introduction of cookie support,
curl was no longer released under the GPL license. Now we're at 4000 lines of
curl was no longer released under the GPL license. Now we are at 4000 lines of
code, we switched over to the MPL license to restrict the effects of
"copyleft".

View File

@ -8,7 +8,7 @@
Cookies are either "session cookies" which typically are forgotten when the
session is over which is often translated to equal when browser quits, or
the cookies aren't session cookies they have expiration dates after which
the cookies are not session cookies they have expiration dates after which
the client will throw them away.
Cookies are set to the client with the Set-Cookie: header and are sent to
@ -74,7 +74,7 @@
`-b, --cookie`
tell curl a file to read cookies from and start the cookie engine, or if it
isn't a file it will pass on the given string. -b name=var works and so does
is not a file it will pass on the given string. -b name=var works and so does
-b cookiefile.
`-j, --junk-session-cookies`

View File

@ -78,7 +78,7 @@ attempt to re-use existing HTTP/2 connections and just add a new stream over
that when doing subsequent parallel requests.
While libcurl sets up a connection to a HTTP server there is a period during
which it doesn't know if it can pipeline or do multiplexing and if you add new
which it does not know if it can pipeline or do multiplexing and if you add new
transfers in that period, libcurl will default to start new connections for
those transfers. With the new option `CURLOPT_PIPEWAIT` (added in 7.43.0), you
can ask that a transfer should rather wait and see in case there's a
@ -105,7 +105,7 @@ Since 7.47.0, the curl tool enables HTTP/2 by default for HTTPS connections.
curl tool limitations
---------------------
The command line tool doesn't support HTTP/2 server push. It supports
The command line tool does not support HTTP/2 server push. It supports
multiplexing when the parallel transfer option is used.
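A sketch of that combination (placeholder URLs; --parallel needs curl 7.66.0 or later):
```
# negotiate HTTP/2 and run the transfers in parallel so they can share one connection
curl --http2 --parallel https://example.com/one https://example.com/two
```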
HTTP Alternative Services


@ -13,7 +13,7 @@ and libcurl.
## QUIC libraries
QUIC libraries we're experimenting with:
QUIC libraries we are experimenting with:
[ngtcp2](https://github.com/ngtcp2/ngtcp2)


@ -46,7 +46,7 @@ over the wire with Hyper.
## Limitations
The hyper backend doesn't support
The hyper backend does not support
- `CURLOPT_IGNORE_CONTENT_LENGTH`
- `--raw` and disabling `CURLOPT_HTTP_TRANSFER_DECODING`


@ -26,14 +26,14 @@ Current flaws in the curl CMake build
- Builds libcurl without large file support
- Does not support all SSL libraries (only OpenSSL, Schannel,
Secure Transport, and mbed TLS, NSS, WolfSSL)
- Doesn't allow different resolver backends (no c-ares build support)
- Does not allow different resolver backends (no c-ares build support)
- No RTMP support built
- Doesn't allow build curl and libcurl debug enabled
- Doesn't allow a custom CA bundle path
- Doesn't allow you to disable specific protocols from the build
- Doesn't find or use krb4 or GSS
- Rebuilds test files too eagerly, but still can't run the tests
- Doesn't detect the correct strerror_r flavor when cross-compiling (issue #1123)
- Does not allow building curl and libcurl with debug enabled
- Does not allow a custom CA bundle path
- Does not allow you to disable specific protocols from the build
- Does not find or use krb4 or GSS
- Rebuilds test files too eagerly, but still cannot run the tests
- Does not detect the correct strerror_r flavor when cross-compiling (issue #1123)
Command Line CMake
@ -27,7 +27,7 @@ proceed.
# Unix
A normal Unix installation is made in three or four steps (after you've
A normal Unix installation is made in three or four steps (after you have
unpacked the source archive):
./configure --with-openssl [--with-gnutls --with-wolfssl]
@ -58,7 +58,7 @@ your own home directory:
The configure script always tries to find a working SSL library unless
explicitly told not to. If you have OpenSSL installed in the default search
path for your compiler/linker, you don't need to do anything special. If you
path for your compiler/linker, you do not need to do anything special. If you
have OpenSSL installed in `/usr/local/ssl`, you can run configure like:
./configure --with-openssl
@ -85,7 +85,7 @@ work:
CPPFLAGS="-I/path/to/ssl/include" LDFLAGS="-L/path/to/ssl/lib" ./configure
If you have shared SSL libs installed in a directory where your run-time
linker doesn't find them (which usually causes configure failures), you can
linker does not find them (which usually causes configure failures), you can
provide this option to gcc to set a hard-coded path to the run-time linker:
LDFLAGS=-Wl,-R/usr/local/ssl/lib ./configure --with-openssl
@ -102,7 +102,7 @@ an option like:
./configure --disable-thread
If you're a curl developer and use gcc, you might want to enable more debug
If you are a curl developer and use gcc, you might want to enable more debug
options with the `--enable-debug` option.
curl can be built to use a whole range of libraries to provide various useful
@ -197,7 +197,7 @@ If you want to enable LDAPS support then set LDAPS=1.
Almost identical to the unix installation. Run the configure script in the
curl source tree root with `sh configure`. Make sure you have the `sh`
executable in `/bin/` or you'll see the configure fail toward the end.
executable in `/bin/` or you will see the configure fail toward the end.
Run `make`
@ -355,7 +355,7 @@ to adjust those variables accordingly. After that you can build curl like this:
./configure --host aarch64-linux-android --with-pic --disable-shared
Note that this won't give you SSL/TLS support. If you need SSL/TLS, you have
Note that this will not give you SSL/TLS support. If you need SSL/TLS, you have
to build curl against an SSL/TLS layer, e.g. OpenSSL, because it's impossible for
curl to access Android's native SSL/TLS layer. To build curl for Android using
OpenSSL, follow the OpenSSL build instructions and then install `libssl.a` and
@ -366,7 +366,7 @@ OpenSSL like this:
./configure --host aarch64-linux-android --with-pic --disable-shared --with-openssl="$TOOLCHAIN/sysroot/usr"
Note, however, that you must target at least Android M (API level 23) or `configure`
won't be able to detect OpenSSL since `stderr` (and the like) weren't defined
will not be able to detect OpenSSL since `stderr` (and the like) were not defined
before Android M.
# IBM i
@ -387,16 +387,16 @@ they affect both environments.
## Multithreading notes
By default, jobs in IBM i won't start with threading enabled. (Exceptions
By default, jobs in IBM i will not start with threading enabled. (Exceptions
include interactive PASE sessions started by `QP2TERM` or SSH.) If you use
curl in an environment without threading when options like async DNS were
enabled, you'll messages like:
enabled, you will see messages like:
```
getaddrinfo() thread failed to start
```
Don't panic! curl and your program aren't broken. You can fix this by:
Do not panic! curl and your program are not broken. You can fix this by:
- Set the environment variable `QIBM_MULTI_THREADED` to `Y` before starting
your program. This can be done at whatever scope you feel is appropriate.
@ -514,7 +514,7 @@ line. Following is a list of appropriate key words:
This is a probably incomplete list of known CPU architectures and operating
systems that curl has been compiled for. If you know a system curl compiles
and runs on, that isn't listed, please let us know!
and runs on that is not listed, please let us know!
## 85 Operating Systems
@ -62,7 +62,7 @@ git
===
All changes to the sources are committed to the git repository as soon as
they're somewhat verified to work. Changes shall be committed as independently
they are somewhat verified to work. Changes shall be committed as independently
as possible so that individual changes can be easily spotted and tracked
afterwards.
@ -74,7 +74,7 @@ Portability
===========
We write curl and libcurl to compile with C89 compilers. On 32-bit and up
machines. Most of libcurl assumes more or less POSIX compliance but that's
machines. Most of libcurl assumes more or less POSIX compliance but that is
not a requirement.
We write libcurl to build and work with lots of third party tools, and we
@ -103,7 +103,7 @@ Operating Systems
-----------------
On systems where configure runs, we aim at working on them all - if they have
a suitable C compiler. On systems that don't run configure, we strive to keep
a suitable C compiler. On systems that do not run configure, we strive to keep
curl running correctly on:
- Windows 98
@ -143,7 +143,7 @@ Windows vs Unix
2. Windows requires a couple of init calls for the socket stuff.
That's taken care of by the `curl_global_init()` call, but if other libs
That is taken care of by the `curl_global_init()` call, but if other libs
also do it etc there might be reasons for applications to alter that
behavior.
@ -162,13 +162,13 @@ Windows vs Unix
Inside the source code, We make an effort to avoid `#ifdef [Your OS]`. All
conditionals that deal with features *should* instead be in the format
`#ifdef HAVE_THAT_WEIRD_FUNCTION`. Since Windows can't run configure scripts,
`#ifdef HAVE_THAT_WEIRD_FUNCTION`. Since Windows cannot run configure scripts,
we maintain a `curl_config-win32.h` file in lib directory that is supposed to
look exactly like a `curl_config.h` file would have looked like on a Windows
machine!
Generally speaking: always remember that this will be compiled on dozens of
operating systems. Don't walk on the edge!
operating systems. Do not walk on the edge!
<a name="Library"></a>
Library
@ -237,7 +237,7 @@ multi_do()
The functions are named after the protocols they handle.
The protocol-specific functions of course deal with protocol-specific
negotiations and setup. When they're ready to start the actual file
negotiations and setup. When they are ready to start the actual file
transfer they call the `Curl_setup_transfer()` function (in
`lib/transfer.c`) to set up the transfer and return.
@ -276,7 +276,7 @@ Curl_disconnect()
connections so this is not normally called when `curl_easy_perform()` is
used. This function is only used when we are certain that no more transfers
are going to be made on the connection. It can be also closed by force, or
it can be called to make sure that libcurl doesn't keep too many
it can be called to make sure that libcurl does not keep too many
connections alive at the same time.
This function cleans up all resources that are associated with a single
@ -372,14 +372,14 @@ General
more).
`lib/getenv.c` offers `curl_getenv()` which is for reading environment
variables in a neat platform independent way. That's used in the client, but
variables in a neat platform independent way. That is used in the client, but
also in `lib/url.c` when checking the proxy environment variables. Note that
contrary to the normal unix `getenv()`, this returns an allocated buffer that
must be `free()`ed after use.
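A minimal sketch of the pattern described above; the environment variable name is only illustrative:

```c
#include <stdlib.h>
#include <curl/curl.h>

static void check_proxy_env(void)
{
  /* returns a freshly allocated copy of the value, or NULL if unset */
  char *proxy = curl_getenv("http_proxy");
  if(proxy) {
    /* ... use the value ... */
    free(proxy);
  }
}
```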
`lib/netrc.c` holds the `.netrc` parser.
`lib/timeval.c` features replacement functions for systems that don't have
`lib/timeval.c` features replacement functions for systems that do not have
`gettimeofday()` and a few support functions for timeval conversions.
A function named `curl_version()` that returns the full curl version string
@ -408,7 +408,7 @@ Persistent Connections
- When the transfer operation is complete, the connection is left
open. Particular options may tell libcurl not to, and protocols may signal
closure on connections and then they won't be kept open, of course.
closure on connections and then they will not be kept open, of course.
- When `curl_easy_cleanup()` is called, we close all still opened connections,
unless of course the multi interface "owns" the connections.
@ -454,7 +454,7 @@ SSL libraries
Library Symbols
===============
All symbols used internally in libcurl must use a `Curl_` prefix if they're
All symbols used internally in libcurl must use a `Curl_` prefix if they are
used in more than a single file. Single-file symbols must be made static.
Public ("exported") symbols must use a `curl_` prefix. (There are exceptions,
but they are to be changed to follow this pattern in future versions.) Public
@ -465,7 +465,7 @@ Library Symbols
Return Codes and Informationals
===============================
I've made things simple. Almost every function in libcurl returns a CURLcode,
I have made things simple. Almost every function in libcurl returns a CURLcode,
that must be `CURLE_OK` if everything is OK or otherwise a suitable error
code as the `curl/curl.h` include file defines. The place that detects an
error must use the `Curl_failf()` function to set the human-readable error
@ -475,7 +475,7 @@ Return Codes and Informationals
must supply a fair number of informational messages by using the
`Curl_infof()` function. Those messages are only displayed when the user
explicitly asks for them. They are best used when revealing information that
isn't otherwise obvious.
is not otherwise obvious.
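On the application side those two channels surface as `CURLOPT_ERRORBUFFER` (the `Curl_failf()` text) and the verbose output (the `Curl_infof()` text); a minimal sketch with an illustrative URL:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  char errbuf[CURL_ERROR_SIZE] = "";
  CURL *curl;
  CURLcode res;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf); /* failf() output */
  curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);         /* infof() output */

  res = curl_easy_perform(curl);
  if(res != CURLE_OK)
    fprintf(stderr, "error %d: %s\n", res, errbuf);

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}
```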
<a name="abi"></a>
API/ABI
@ -553,7 +553,7 @@ Test Suite
`httpserver.pl` and `ftpserver.pl` before all the test cases are performed.
The test suite currently only runs on Unix-like platforms.
You'll find a description of the test suite in the `tests/README` file, and
You will find a description of the test suite in the `tests/README` file, and
the test case data files in the `tests/FILEFORMAT` file.
The test suite automatically detects if curl was built with the memory
@ -591,7 +591,7 @@ Asynchronous name resolves
Lastly, I also changed libcurl to be single-threaded rather than
multi-threaded, again this was to prevent some duplicate symbol errors. I'm
not sure why I needed to change everything to single-threaded, but when I
didn't I got redefinition errors for several CRT functions (`malloc()`,
did not, I got redefinition errors for several CRT functions (`malloc()`,
`stricmp()`, etc.)
<a name="curl_off_t"></a>
@ -716,8 +716,8 @@ Content Encoding
## `CURLRES_IPV6`
this host has `getaddrinfo()` and family, and thus we use that. The host may
not be able to resolve IPv6, but we don't really have to take that into
account. Hosts that aren't IPv6-enabled have `CURLRES_IPV4` defined.
not be able to resolve IPv6, but we do not really have to take that into
account. Hosts that are not IPv6-enabled have `CURLRES_IPV4` defined.
## `CURLRES_ARES`
@ -799,7 +799,7 @@ Track Down Memory Leaks
This now outputs a report on what resources were allocated but never
freed etc. This report is fine for posting to the list!
If this doesn't produce any output, no leak was detected in libcurl. Then
If this does not produce any output, no leak was detected in libcurl. Then
the leak is most likely to be in your code.
<a name="multi_socket"></a>
@ -853,7 +853,7 @@ Structs in libcurl
==================
This section should cover 7.32.0 pretty accurately, but will make sense even
for older and later versions as things don't change drastically that often.
for older and later versions as things do not change drastically that often.
<a name="Curl_easy"></a>
## Curl_easy
@ -899,7 +899,7 @@ for older and later versions as things don't change drastically that often.
performance boost.
Each `connectdata` identifies a single physical connection to a server. If
the connection can't be kept alive, the connection will be closed after use
the connection cannot be kept alive, the connection will be closed after use
and then this struct can be removed from the cache and freed.
Thus, the same `Curl_easy` can be used multiple times and each time select
@ -977,7 +977,7 @@ for older and later versions as things don't change drastically that often.
The concrete function pointer prototypes can be found in `lib/urldata.h`.
`->scheme` is the URL scheme name, usually spelled out in uppercase. That's
`->scheme` is the URL scheme name, usually spelled out in uppercase. That is
"HTTP" or "FTP" etc. SSL versions of the protocol need their own
`Curl_handler` setup so HTTPS is separate from HTTP.
@ -1007,7 +1007,7 @@ for older and later versions as things don't change drastically that often.
`->doing` keeps getting called while issuing the transfer request command(s)
`->done` gets called when the transfer is complete and DONE. That's after the
`->done` gets called when the transfer is complete and DONE. That is after the
main data has been transferred.
`->do_more` gets called during the `DO_MORE` state. The FTP protocol uses
@ -1048,12 +1048,12 @@ for older and later versions as things don't change drastically that often.
limit which "direction" of socket actions that the main engine will
concern itself with.
- `PROTOPT_NONETWORK` - a protocol that doesn't use network (read `file:`)
- `PROTOPT_NONETWORK` - a protocol that does not use network (read `file:`)
- `PROTOPT_NEEDSPWD` - this protocol needs a password and will use a default
one unless one is provided
- `PROTOPT_NOURLQUERY` - this protocol can't handle a query part on the URL
- `PROTOPT_NOURLQUERY` - this protocol cannot handle a query part on the URL
(?foo=bar)
<a name="conncache"></a>
@ -1075,7 +1075,7 @@ for older and later versions as things don't change drastically that often.
holds.
Then individual `Curl_easy` structs can be made to share specific things
that they otherwise wouldn't, such as cookies.
that they otherwise would not, such as cookies.
The `Curl_share` struct can currently hold cookies, DNS cache and the SSL
session cache.
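From an application's point of view that sharing is requested with the public share API; a minimal single-threaded sketch (no lock callbacks set, URL illustrative):

```c
#include <curl/curl.h>

int main(void)
{
  CURLSH *share;
  CURL *curl;

  curl_global_init(CURL_GLOBAL_DEFAULT);

  share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_SSL_SESSION);

  curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_SHARE, share); /* attach the share object */
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);

  curl_share_cleanup(share);
  curl_global_cleanup();
  return 0;
}
```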
@ -18,19 +18,19 @@ problems may have been fixed or changed somewhat since this was written!
1.5 Expect-100 meets 417
1.6 Unnecessary close when 401 received waiting for 100
1.7 Deflate error after all content was received
1.8 DoH isn't used for all name resolves when enabled
1.8 DoH is not used for all name resolves when enabled
1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
2. TLS
2.1 CURLINFO_SSL_VERIFYRESULT has limited support
2.2 DER in keychain
2.3 Unable to use PKCS12 certificate with Secure Transport
2.4 Secure Transport won't import PKCS#12 client certificates without a password
2.4 Secure Transport will not import PKCS#12 client certificates without a password
2.5 Client cert handling with Issuer DN differs between backends
2.6 CURL_GLOBAL_SSL
2.7 Client cert (MTLS) issues with Schannel
2.8 Schannel disable CURLOPT_SSL_VERIFYPEER and verify hostname
2.9 TLS session cache doesn't work with TFO
2.9 TLS session cache does not work with TFO
2.10 Store TLS context per transfer instead of per connection
2.11 Schannel TLS 1.2 handshake bug in old Windows versions
2.12 FTPS with Schannel times out file list operation
@ -53,7 +53,7 @@ problems may have been fixed or changed somewhat since this was written!
5.2 curl-config --libs contains private details
5.3 curl compiled on OSX 10.13 failed to run on OSX 10.10
5.4 Build with statically built dependency
5.5 can't handle Unicode arguments in non-Unicode builds on Windows
5.5 Cannot handle Unicode arguments in non-Unicode builds on Windows
5.7 Visual Studio project gaps
5.8 configure finding libs in wrong directory
5.9 Utilize Requires.private directives in libcurl.pc
@ -66,14 +66,14 @@ problems may have been fixed or changed somewhat since this was written!
6.2 MIT Kerberos for Windows build
6.3 NTLM in system context uses wrong name
6.4 Negotiate and Kerberos V5 need a fake user name
6.5 NTLM doesn't support password with § character
6.5 NTLM does not support password with § character
6.6 libcurl can fail to try alternatives with --proxy-any
6.7 Don't clear digest for single realm
6.7 Do not clear digest for single realm
6.8 RTSP authentication breaks without redirect support
6.9 SHA-256 digest not supported in Windows SSPI builds
6.10 curl never completes Negotiate over HTTP
6.11 Negotiate on Windows fails
6.12 Can't use Secure Transport with Crypto Token Kit
6.12 Cannot use Secure Transport with Crypto Token Kit
7. FTP
7.1 FTP without or slow 220 response
@ -89,12 +89,12 @@ problems may have been fixed or changed somewhat since this was written!
7.11 FTPS upload data loss with TLS 1.3
8. TELNET
8.1 TELNET and time limitations don't work
8.1 TELNET and time limitations do not work
8.2 Microsoft telnet server
9. SFTP and SCP
9.1 SFTP doesn't do CURLOPT_POSTQUOTE correct
9.2 wolfssh: publickey auth doesn't work
9.1 SFTP does not do CURLOPT_POSTQUOTE correctly
9.2 wolfssh: publickey auth does not work
9.3 Remote recursive folder creation with SFTP
10. SOCKS
@ -104,13 +104,13 @@ problems may have been fixed or changed somewhat since this was written!
11. Internals
11.1 Curl leaks .onion hostnames in DNS
11.2 error buffer not set if connection to multiple addresses fails
11.3 Disconnects don't do verbose
11.3 Disconnects do not do verbose
11.4 HTTP test server 'connection-monitor' problems
11.5 Connection information when using TCP Fast Open
11.6 slow connect to localhost on Windows
11.7 signal-based resolver timeouts
11.8 DoH leaks memory after followlocation
11.9 DoH doesn't inherit all transfer options
11.9 DoH does not inherit all transfer options
11.10 Blocking socket operations in non-blocking API
11.11 A shared connection cache is not thread-safe
11.12 'no_proxy' string-matches IPv6 numerical addresses
@ -122,7 +122,7 @@ problems may have been fixed or changed somewhat since this was written!
12. LDAP
12.1 OpenLDAP hangs after returning results
12.2 LDAP on Windows does authentication wrong?
12.3 LDAP on Windows doesn't work
12.3 LDAP on Windows does not work
12.4 LDAPS with NSS is slow
13. TCP/IP
@ -164,8 +164,8 @@ problems may have been fixed or changed somewhat since this was written!
18.5 HTTP/3 download with quiche halts after a while
18.6 HTTP/3 multipart POST with quiche fails
18.7 HTTP/3 quiche upload large file fails
18.8 HTTP/3 doesn't support client certs
18.9 connection migration doesn't work
18.8 HTTP/3 does not support client certs
18.9 connection migration does not work
==============================================================================
@ -218,11 +218,11 @@ problems may have been fixed or changed somewhat since this was written!
See https://github.com/curl/curl/issues/2719
1.8 DoH isn't used for all name resolves when enabled
1.8 DoH is not used for all name resolves when enabled
Even if DoH is specified to be used, there are some name resolves that are
done without it. This should be fixed. When the internal function
`Curl_resolver_wait_resolv()` is called, it doesn't use DoH to complete the
`Curl_resolver_wait_resolv()` is called, it does not use DoH to complete the
resolve as it otherwise should.
See https://github.com/curl/curl/pull/3857 and
@ -231,11 +231,11 @@ problems may have been fixed or changed somewhat since this was written!
1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
I'm using libcurl to POST form data using a FILE* with the CURLFORM_STREAM
option of curl_formadd(). I've noticed that if the connection drops at just
option of curl_formadd(). I have noticed that if the connection drops at just
the right time, the POST is reattempted without the data from the file. It
seems like the file stream position isn't getting reset to the beginning of
seems like the file stream position is not getting reset to the beginning of
the file. I found the CURLOPT_SEEKFUNCTION option and set that with a
function that performs an fseek() on the FILE*. However, setting that didn't
function that performs an fseek() on the FILE*. However, setting that did not
seem to fix the issue or even get called. See
https://github.com/curl/curl/issues/768
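For reference, the seek callback the report expects to be invoked has this shape in the public API; whether it actually gets called in the CURLFORM_STREAM case is the open issue (names are illustrative):

```c
#include <stdio.h>
#include <curl/curl.h>

/* rewind/seek callback: 'userp' is whatever CURLOPT_SEEKDATA was set to */
static int seek_cb(void *userp, curl_off_t offset, int origin)
{
  FILE *fp = (FILE *)userp;
  if(fseek(fp, (long)offset, origin) != 0)
    return CURL_SEEKFUNC_FAIL;
  return CURL_SEEKFUNC_OK;
}

static void setup_seek(CURL *curl, FILE *fp)
{
  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
  curl_easy_setopt(curl, CURLOPT_SEEKDATA, fp);
}
```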
@ -249,14 +249,14 @@ problems may have been fixed or changed somewhat since this was written!
2.2 DER in keychain
Curl doesn't recognize certificates in DER format in keychain, but it works
Curl does not recognize certificates in DER format in keychain, but it works
with PEM. https://curl.se/bug/view.cgi?id=1065
2.3 Unable to use PKCS12 certificate with Secure Transport
See https://github.com/curl/curl/issues/5403
2.4 Secure Transport won't import PKCS#12 client certificates without a password
2.4 Secure Transport will not import PKCS#12 client certificates without a password
libcurl calls SecPKCS12Import with the PKCS#12 client certificate, but that
function rejects certificates that do not have a password.
@ -264,7 +264,7 @@ problems may have been fixed or changed somewhat since this was written!
2.5 Client cert handling with Issuer DN differs between backends
When the specified client certificate doesn't match any of the
When the specified client certificate does not match any of the
server-specified DNs, the OpenSSL and GnuTLS backends behave differently.
The github discussion may contain a solution.
@ -308,7 +308,7 @@ problems may have been fixed or changed somewhat since this was written!
https://github.com/curl/curl/issues/3284
2.9 TLS session cache doesn't work with TFO
2.9 TLS session cache does not work with TFO
See https://github.com/curl/curl/issues/4301
@ -356,7 +356,7 @@ problems may have been fixed or changed somewhat since this was written!
handshake, curl has sent an HTTP request to the server and at the same time
the server has sent a TLS hello request (renegotiate) to curl. Both are
waiting for the other to respond. OpenSSL is supposed to send a handshake
response but doesn't.
response but does not.
https://github.com/curl/curl/issues/6785
https://github.com/openssl/openssl/issues/14722
@ -383,7 +383,7 @@ problems may have been fixed or changed somewhat since this was written!
3.4 AUTH PLAIN for SMTP is not working on all servers
Specifying "--login-options AUTH=PLAIN" on the command line doesn't seem to
Specifying "--login-options AUTH=PLAIN" on the command line does not seem to
work correctly.
See https://github.com/curl/curl/issues/4080
@ -392,7 +392,7 @@ problems may have been fixed or changed somewhat since this was written!
4.1 -J and -O with %-encoded file names
-J/--remote-header-name doesn't decode %-encoded file names. RFC6266 details
-J/--remote-header-name does not decode %-encoded file names. RFC6266 details
how it should be done. The can of worms is basically that we have no charset
handling in curl and ascii >=128 is a challenge for us. Not to mention that
decoding also means that we need to check for nastiness that is attempted,
@ -400,10 +400,10 @@ problems may have been fixed or changed somewhat since this was written!
embedded slashes should be cut off.
https://curl.se/bug/view.cgi?id=1294
-O also doesn't decode %-encoded names, and while it has even less
-O also does not decode %-encoded names, and while it has even less
information about the charset involved the process is similar to the -J case.
Note that we won't add decoding to -O without the user asking for it with
Note that we will not add decoding to -O without the user asking for it with
some other means as well, since -O has always been documented to use the name
exactly as specified in the URL.
@ -418,7 +418,7 @@ problems may have been fixed or changed somewhat since this was written!
4.3 --retry and transfer timeouts
If using --retry and the transfer timeouts (possibly due to using -m or
-y/-Y) the next attempt doesn't resume the transfer properly from what was
-y/-Y) the next attempt does not resume the transfer properly from what was
downloaded in the previous attempt but will truncate and restart at the
original position where it was at before the previous failed attempt. See
https://curl.se/mail/lib-2008-01/0080.html and Mandriva bug report
@ -466,13 +466,13 @@ problems may have been fixed or changed somewhat since this was written!
We welcome help to improve curl's ability to link with static libraries, but
it is likely a task that we can never fully support.
5.5 can't handle Unicode arguments in non-Unicode builds on Windows
5.5 Cannot handle Unicode arguments in non-Unicode builds on Windows
If a URL or filename can't be encoded using the user's current codepage then
If a URL or filename cannot be encoded using the user's current codepage then
it can only be encoded properly in the Unicode character set. Windows uses
UTF-16 encoding for Unicode and stores it in wide characters, however curl
and libcurl are not equipped for that at the moment except when built with
_UNICODE and UNICODE defined. And, except for Cygwin, Windows can't use UTF-8
_UNICODE and UNICODE defined. And, except for Cygwin, Windows cannot use UTF-8
as a locale.
https://curl.se/bug/?i=345
@ -532,7 +532,7 @@ problems may have been fixed or changed somewhat since this was written!
number of the Windows builds are flaky. This means that we rarely get all CI
builds go green and complete without errors. This is unfortunate as it makes
us sometimes miss actual build problems and it is surprising to newcomers to
the project who (rightfully) don't expect this.
the project who (rightfully) do not expect this.
See https://github.com/curl/curl/issues/6972
@ -573,7 +573,7 @@ problems may have been fixed or changed somewhat since this was written!
new conn->bits.want_authentication which is set when any of the authentication
options are set.
6.5 NTLM doesn't support password with § character
6.5 NTLM does not support password with § character
https://github.com/curl/curl/issues/2120
@ -583,12 +583,12 @@ problems may have been fixed or changed somewhat since this was written!
authentication will cause libcurl to abort trying other options if the
failed method has a higher preference than the alternatives. As an example,
--proxy-any against a proxy which advertise Negotiate and NTLM, but which
fails to set up Kerberos authentication won't proceed to try authentication
fails to set up Kerberos authentication will not proceed to try authentication
using NTLM.
https://github.com/curl/curl/issues/876
6.7 Don't clear digest for single realm
6.7 Do not clear digest for single realm
https://github.com/curl/curl/issues/3267
@ -614,7 +614,7 @@ problems may have been fixed or changed somewhat since this was written!
6.10 curl never completes Negotiate over HTTP
Apparently it isn't working correctly...?
Apparently it is not working correctly...?
See https://github.com/curl/curl/issues/5235
@ -626,7 +626,7 @@ problems may have been fixed or changed somewhat since this was written!
https://github.com/curl/curl/issues/5881
6.12 Can't use Secure Transport with Crypto Token Kit
6.12 Cannot use Secure Transport with Crypto Token Kit
https://github.com/curl/curl/issues/7048
@ -645,7 +645,7 @@ problems may have been fixed or changed somewhat since this was written!
When doing FTP over a socks proxy or CONNECT through HTTP proxy and the multi
interface is used, libcurl will fail if the (passive) TCP connection for the
data transfer isn't more or less instant as the code does not properly wait
data transfer is not more or less instant as the code does not properly wait
for the connect to be confirmed. See test case 564 for a first shot at a test
case.
@ -658,13 +658,13 @@ problems may have been fixed or changed somewhat since this was written!
7.4 FTP with ACCT
When doing an operation over FTP that requires the ACCT command (but not when
logging in), the operation will fail since libcurl doesn't detect this and
logging in), the operation will fail since libcurl does not detect this and
thus fails to issue the correct command:
https://curl.se/bug/view.cgi?id=635
7.5 ASCII FTP
FTP ASCII transfers do not follow RFC959. They don't convert the data
FTP ASCII transfers do not follow RFC959. They do not convert the data
accordingly (not for sending nor for receiving). RFC 959 section 3.1.1.1
clearly describes how this should be done:
@ -700,12 +700,12 @@ problems may have been fixed or changed somewhat since this was written!
When 'multi_done' is called before the transfer has been completed the normal
way, it is considered a "premature" transfer end. In this situation, libcurl
closes the connection assuming it doesn't know the state of the connection so
it can't be reused for subsequent requests.
closes the connection assuming it does not know the state of the connection so
it cannot be reused for subsequent requests.
With FTP however, this isn't necessarily true but there are a bunch of
With FTP however, this is not necessarily true but there are a bunch of
situations (listed in the ftp_done code) where it *could* keep the connection
alive even in this situation - but the current code doesn't. Fixing this would
alive even in this situation - but the current code does not. Fixing this would
allow libcurl to reuse FTP connections better.
7.9 Passive transfer tries only one IP address
@ -734,7 +734,7 @@ problems may have been fixed or changed somewhat since this was written!
message. When curl closes the upload connection if unread data has been
received (such as a TLS handshake message) then the TCP protocol sends an
RST to the server, which may cause the server to discard or truncate the
upload if it hasn't read all sent data yet, and then return an error to curl
upload if it has not read all sent data yet, and then return an error to curl
on the control channel connection.
Since 7.78.0 this is mostly fixed. curl will do a single read before closing
@ -746,9 +746,9 @@ problems may have been fixed or changed somewhat since this was written!
8. TELNET
8.1 TELNET and time limitations don't work
8.1 TELNET and time limitations do not work
When using telnet, the time limitation options don't work.
When using telnet, the time limitation options do not work.
https://curl.se/bug/view.cgi?id=846
8.2 Microsoft telnet server
@ -759,7 +759,7 @@ problems may have been fixed or changed somewhat since this was written!
9. SFTP and SCP
9.1 SFTP doesn't do CURLOPT_POSTQUOTE correct
9.1 SFTP does not do CURLOPT_POSTQUOTE correctly
When libcurl sends CURLOPT_POSTQUOTE commands when connected to a SFTP server
using the multi interface, the commands are not being sent correctly and
@ -768,10 +768,10 @@ problems may have been fixed or changed somewhat since this was written!
report but it cannot be accepted as-is. See
https://curl.se/bug/view.cgi?id=748
9.2 wolfssh: publickey auth doesn't work
9.2 wolfssh: publickey auth does not work
When building curl to use the wolfSSH backend for SFTP, the publickey
authentication doesn't work. This is simply functionality not written for curl
authentication does not work. This is simply functionality not written for curl
yet, the necessary API to make this work is provided by wolfSSH.
See https://github.com/curl/curl/issues/4820
@ -788,11 +788,11 @@ problems may have been fixed or changed somewhat since this was written!
10.3 FTPS over SOCKS
libcurl doesn't support FTPS over a SOCKS proxy.
libcurl does not support FTPS over a SOCKS proxy.
10.4 active FTP over a SOCKS
libcurl doesn't support active FTP over a SOCKS proxy
libcurl does not support active FTP over a SOCKS proxy
11. Internals
@ -812,14 +812,14 @@ problems may have been fixed or changed somewhat since this was written!
CURLE_COULDNT_CONNECT. But the error buffer set by CURLOPT_ERRORBUFFER
remains empty. Issue: https://github.com/curl/curl/issues/544
11.3 Disconnects don't do verbose
11.3 Disconnects do not do verbose
Due to how libcurl keeps connections alive in the "connection pool" after use
to potentially transcend the life-time of the initial easy handle that was
used to drive the transfer over that connection, it uses a *separate* and
internal easy handle when it shuts down the connection. That separate
connection might not have the exact same settings as the original easy
handle, and in particular it is often note-worthy that it doesn't have the
handle, and in particular it is often noteworthy that it does not have the
same VERBOSE and debug callbacks setup so that an application will not get
the protocol data for the disconnect phase of a transfer the same way it got
all the other data.
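For context, the per-handle verbose/debug setup referred to above looks like this in the public API; the internal easy handle used for the disconnect simply has nothing like it attached (sketch, names illustrative):

```c
#include <stdio.h>
#include <curl/curl.h>

/* debug callback attached to the application's own easy handle */
static int debug_cb(CURL *handle, curl_infotype type, char *data,
                    size_t size, void *userptr)
{
  (void)handle;
  (void)userptr;
  if(type == CURLINFO_TEXT)
    fwrite(data, 1, size, stderr);
  return 0;
}

static void enable_debug(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debug_cb);
  curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
}
```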
@ -832,7 +832,7 @@ problems may have been fixed or changed somewhat since this was written!
11.4 HTTP test server 'connection-monitor' problems
The 'connection-monitor' feature of the sws HTTP test server doesn't work
The 'connection-monitor' feature of the sws HTTP test server does not work
properly if some tests are run in unexpected order. Like 1509 and then 1525.
See https://github.com/curl/curl/issues/868
@ -853,7 +853,7 @@ problems may have been fixed or changed somewhat since this was written!
HAPPY_EYEBALLS_TIMEOUT define exactly. Lowering that define speeds up the
connection, suggesting a problem in the HE handling.
If we can *know* that we're talking to a local host, we should lower the
If we can *know* that we are talking to a local host, we should lower the
happy eyeballs delay timeout for IPv6 (related: hardcode the "localhost"
addresses, mentioned in TODO). Possibly we should reduce that delay for all.
@ -875,7 +875,7 @@ problems may have been fixed or changed somewhat since this was written!
https://github.com/curl/curl/issues/4592
11.9 DoH doesn't inherit all transfer options
11.9 DoH does not inherit all transfer options
Some options are not inherited because they are not relevant for the DoH SSL
connections, or inheriting the option may result in unexpected behavior. For
@ -903,7 +903,7 @@ problems may have been fixed or changed somewhat since this was written!
11.12 'no_proxy' string-matches IPv6 numerical addresses
This has the downside that "::1" for example doesn't match "::0:1" even
This has the downside that "::1" for example does not match "::0:1" even
though they are in fact the same address.
See https://github.com/curl/curl/issues/5745
@ -972,7 +972,7 @@ problems may have been fixed or changed somewhat since this was written!
https://github.com/curl/curl/issues/3116
12.3 LDAP on Windows doesn't work
12.3 LDAP on Windows does not work
A simple curl command line getting "ldap://ldap.forumsys.com" returns an
error that says "no memory" !
@ -988,7 +988,7 @@ problems may have been fixed or changed somewhat since this was written!
13.1 --interface for ipv6 binds to unusable IP address
Since IPv6 provides a lot of addresses with different scope, binding to an
IPv6 address needs to take the proper care so that it doesn't bind to a
IPv6 address needs to take the proper care so that it does not bind to a
locally scoped address as that is bound to fail.
https://github.com/curl/curl/issues/686
@ -997,7 +997,7 @@ problems may have been fixed or changed somewhat since this was written!
14.1 DICT responses show the underlying protocol
When getting a DICT response, the protocol parts of DICT aren't stripped off
When getting a DICT response, the protocol parts of DICT are not stripped off
from the output.
https://github.com/curl/curl/issues/1809
@ -1019,13 +1019,13 @@ problems may have been fixed or changed somewhat since this was written!
15.4 build docs/curl.1
The cmake build doesn't create the docs/curl.1 file and therefore must rely on
The cmake build does not create the docs/curl.1 file and therefore must rely on
it being there already. This makes the --manual option not work and test
cases like 1139 can't function.
cases like 1139 cannot function.
15.5 build on Linux links libcurl to libdl
... which it shouldn't need to!
... which it should not need to!
See https://github.com/curl/curl/issues/6165
@ -1088,7 +1088,7 @@ problems may have been fixed or changed somewhat since this was written!
This application crashes at startup with libcurl 7.74.0 (and presumably later
versions too) after we cleaned up OpenSSL initialization. Since this is the
only known application to do this, we suspect it is related to something they
are doing in their setup that isn't kosher. We have not been able to get in
are doing in their setup that is not kosher. We have not been able to get in
contact with them nor got any technical details to help us debug this
further.
@ -1163,12 +1163,12 @@ problems may have been fixed or changed somewhat since this was written!
https://github.com/curl/curl/issues/7532
18.8 HTTP/3 doesn't support client certs
18.8 HTTP/3 does not support client certs
aka "mutual authentication".
https://github.com/curl/curl/issues/7625
18.9 connection migration doesn't work
18.9 connection migration does not work
https://github.com/curl/curl/issues/7695
@ -80,7 +80,7 @@ MAIL ETIQUETTE
1.5 Moderation of new posters
Several of the curl mailing lists automatically make all posts from new
subscribers be moderated. This means that after you've subscribed and
subscribers be moderated. This means that after you have subscribed and
sent your first mail to a list, that mail will not be let through to the
list until a mailing list administrator has verified that it is OK and
permits it to get posted.
@ -111,12 +111,12 @@ MAIL ETIQUETTE
anything good and only puts the light even more on the offender: which was
the entire purpose of it getting sent to the list in the first place.
Don't feed the trolls!
Do not feed the trolls!
1.7 How to unsubscribe
You can unsubscribe the same way you subscribed in the first place. You go
to the page for the particular mailing list you're subscribed to and you enter
to the page for the particular mailing list you are subscribed to and you enter
your email address and password and press the unsubscribe button.
Also, the instructions to unsubscribe are included in the headers of every
@ -129,12 +129,12 @@ MAIL ETIQUETTE
1.8 I posted, now what?
If you aren't subscribed with the exact same email address that you used to
If you are not subscribed with the exact same email address that you used to
send the email, your post will just be silently discarded.
If you posted for the first time to the mailing list, you first need to wait
for an administrator to allow your email to go through (moderated). This
normally happens quickly but in case we're asleep, you may have to wait a
normally happens quickly but in case we are asleep, you may have to wait a
few hours.
Once your email goes through it is sent out to several hundred or even
@ -146,7 +146,7 @@ MAIL ETIQUETTE
You do yourself and all of us a service when you include as many details as
possible already in your first email. Mention your operating system and
environment. Tell us which curl version you're using and tell us what you
environment. Tell us which curl version you are using and tell us what you
did, what happened and what you expected would happen. Preferably, show us
what you did with details enough to allow others to help point out the problem
or repeat the same steps in their locations.
@ -194,7 +194,7 @@ MAIL ETIQUETTE
Many mail programs and web archivers use information within mails to keep
them together as "threads", as collections of posts that discuss a certain
subject. If you don't intend to reply on the same or similar subject, don't
subject. If you do not intend to reply on the same or similar subject, do not
just hit reply on an existing mail and change subject, create a new mail.
2.2 Reply to the List
@ -203,7 +203,7 @@ MAIL ETIQUETTE
reply" or "reply to all", and not just reply to the author of the single
mail you reply to.
We're actively discouraging replying back to the single person by setting
We are actively discouraging replying back to the single person by setting
the Reply-To: field in outgoing mails back to the mailing list address,
making it harder for people to mail the author directly, if only by mistake.
@ -215,7 +215,7 @@ MAIL ETIQUETTE
2.4 Do Not Top-Post
If you reply to a message, don't use top-posting. Top-posting is when you
If you reply to a message, do not use top-posting. Top-posting is when you
write the new text at the top of a mail and you insert the previous quoted
mail conversation below. It forces users to read the mail in a backwards
order to properly understand it.
@ -233,13 +233,13 @@ MAIL ETIQUETTE
When you reply to a mail. You let the mail client insert the previous mail
quoted. Then you put the cursor on the first line of the mail and you move
down through the mail, deleting all parts of the quotes that don't add
down through the mail, deleting all parts of the quotes that do not add
context for your comments. When you want to add a comment you do so, inline,
right after the quotes that relate to your comment. Then you continue
downwards again.
When most of the quotes have been removed and you've added your own words,
you're done!
When most of the quotes have been removed and you have added your own words,
you are done!
2.5 HTML is not for mails
@ -243,10 +243,10 @@ For other ways to do HTTP data upload, see the POST section below.
## Verbose / Debug
If curl fails where it isn't supposed to, if the servers don't let you in, if
you can't understand the responses: use the `-v` flag to get verbose
If curl fails where it is not supposed to, if the servers do not let you in, if
you cannot understand the responses: use the `-v` flag to get verbose
fetching. Curl will output lots of info and what it sends and receives in
order to let the user see all client-server interaction (but it won't show you
order to let the user see all client-server interaction (but it will not show you
the actual data).
curl -v ftp://ftp.upload.com/
@ -419,7 +419,7 @@ the "cookie" should be used for (by specifying `path=value`), when the cookie
should expire (`expire=DATE`), for what domain to use it (`domain=NAME`) and
if it should be used on secure connections only (`secure`).
If you've received a page from a server that contains a header like:
If you have received a page from a server that contains a header like:
```http
Set-Cookie: sessionid=boo123; path="/foo";
@ -494,7 +494,7 @@ From left-to-right:
- Curr.Speed - the average transfer speed the last 5 seconds (the first
5 seconds of a transfer is based on less time of course.)
The `-#` option will display a totally different progress bar that doesn't
The `-#` option will display a totally different progress bar that does not
need much explanation!
## Speed Limit
@ -515,8 +515,8 @@ operation must be completed in whole within 30 minutes:
curl -m 1800 -Y 3000 -y 60 www.far-away-site.com
Forcing curl not to transfer data faster than a given rate is also possible,
which might be useful if you're using a limited bandwidth connection and you
don't want your transfer to use all of it (sometimes referred to as
which might be useful if you are using a limited bandwidth connection and you
do not want your transfer to use all of it (sometimes referred to as
"bandwidth throttle").
Make curl transfer data no faster than 10 kilobytes per second:
@ -575,7 +575,7 @@ URL by making a config file similar to:
url = "http://help.with.curl.com/curlhelp.html"
You can specify another config file to be read by using the `-K`/`--config`
flag. If you set config file name to `-` it'll read the config from stdin,
flag. If you set config file name to `-` it will read the config from stdin,
which can be handy if you want to hide options from being visible in process
tables etc:
@ -631,13 +631,13 @@ do this.
The default way for curl is to issue the PASV command which causes the server
to open another port and await another connection performed by the
client. This is good if the client is behind a firewall that doesn't allow
client. This is good if the client is behind a firewall that does not allow
incoming connections.
curl ftp.download.com
If the server, for example, is behind a firewall that doesn't allow
connections on ports other than 21 (or if it just doesn't support the `PASV`
If the server, for example, is behind a firewall that does not allow
connections on ports other than 21 (or if it just does not support the `PASV`
command), the other way to do it is to use the `PORT` command and instruct the
server to connect to the client on the given IP number and port (as parameters
to the PORT command).
@ -811,7 +811,7 @@ with
ALL_PROXY
A comma-separated list of host names that shouldn't go through any proxy is
A comma-separated list of host names that should not go through any proxy is
set in (only an asterisk, `*` matches all hosts)
NO_PROXY
@ -830,10 +830,10 @@ The usage of the `-x`/`--proxy` flag overrides the environment variables.
Unix introduced the `.netrc` concept a long time ago. It is a way for a user
to specify name and password for commonly visited FTP sites in a file so that
you don't have to type them in each time you visit those sites. You realize
you do not have to type them in each time you visit those sites. You realize
this is a big security risk if someone else gets hold of your passwords, so
therefore most unix programs won't read this file unless it is only readable
by yourself (curl doesn't care though).
therefore most unix programs will not read this file unless it is only readable
by yourself (curl does not care though).
Curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
`--netrc-optional` options). This is not restricted to just FTP, so curl can
@ -892,7 +892,7 @@ Other interesting options for it `-t` include:
- `NEW_ENV=<var,val>` Sets an environment variable.
NOTE: The telnet protocol does not specify any way to login with a specified
user and password so curl can't do that automatically. To do that, you need to
user and password so curl cannot do that automatically. To do that, you need to
track when the login prompt is received and send the username and password
accordingly.
@ -909,7 +909,7 @@ better use of the network.
Note that curl cannot use persistent connections for transfers that are used
in subsequence curl invokes. Try to stuff as many URLs as possible on the same
command line if they are using the same host, as that'll make the transfers
command line if they are using the same host, as that will make the transfers
faster. If you use an HTTP proxy for file transfers, practically all transfers
will be persistent.
@ -965,7 +965,7 @@ Available lists include:
### curl-users
Users of the command line tool. How to use it, what doesn't work, new
Users of the command line tool. How to use it, what does not work, new
features, related tools, questions, news, installations, compilations,
running, porting etc.
@ -24,4 +24,4 @@ Remaining limitations:
- Only QoS level 0 is implemented for publish
- No way to set retain flag for publish
- No TLS (mqtts) support
- Naive EAGAIN handling won't handle split messages
- Naive EAGAIN handling will not handle split messages
@ -48,7 +48,7 @@ you are up for a tough argument.
### URL
There should be a documented URL format. If there is an RFC for it there is no
question about it but the syntax doesn't have to be a published RFC. It could
question about it but the syntax does not have to be a published RFC. It could
be enough if it is already in use by other implementations.
If you make up the syntax just in order to be able to propose it to curl, then
@ -80,7 +80,7 @@ As much of the protocol implementation as possible needs to be verified by
curl test cases. We must have the implementation get tested by CI jobs,
torture tests and more.
We've experienced many times in the past how new implementations were brought
We have experienced many times in the past how new implementations were brought
to curl and immediately once the code had been merged, the originator vanished
from the face of the earth. That is fine, but we need to take the necessary
precautions so when it happens we are still fine.
@ -100,11 +100,11 @@ little easier!
The protocol specification itself should be freely available without requiring
any NDA or similar.
## Don't compare
## Do not compare
We are constantly raising the bar and we are constantly improving the
project. A lot of things we did in the past would not be acceptable if done
today. Therefore, you might be tempted to use shortcuts or "hacks" you can
spot other - existing - protocol implementations have used, but there is
nothing to gain from that. The bar has been raised. Former "cheats" won't be
nothing to gain from that. The bar has been raised. Former "cheats" will not be
tolerated anymore.
@ -40,7 +40,7 @@ Example:
Connections are shared fine between different easy handles, but the
"authentication contexts" are not. So for example doing HTTP Digest auth with
one handle for a particular transfer and then continue on with another handle
that reuses the same connection, the second handle can't send the necessary
that reuses the same connection, the second handle cannot send the necessary
Authorization header at once since the context is only kept in the original
easy handle.
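A minimal sketch of the practical consequence: reuse the same easy handle for the follow-up request so that the authentication context stays available (URLs and credentials are illustrative):

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();

  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
  curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

  /* the first transfer negotiates the Digest challenge */
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/one");
  curl_easy_perform(curl);

  /* same handle: both the connection and the auth context are reused */
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/two");
  curl_easy_perform(curl);

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}
```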
@ -2,11 +2,11 @@
# Documentation
You'll find a mix of various documentation in this directory and
You will find a mix of various documentation in this directory and
subdirectories, using several different formats. Some of them are not ideal
for reading directly in your browser.
If you'd rather see the rendered version of the documentation, check out the
If you would rather see the rendered version of the documentation, check out the
curl website's [documentation section](https://curl.se/docs/) for
general curl stuff or the [libcurl section](https://curl.se/libcurl/) for
libcurl related documentation.
@ -99,10 +99,10 @@ This is a private mailing list for discussions on and about curl security
issues.
Who is on this list? There are a couple of criteria you must meet, and then we
might ask you to join the list or you can ask to join it. It really isn't a
might ask you to join the list or you can ask to join it. It really is not a
formal process. We basically only require that you have a long-term presence
in the curl project and you have shown an understanding for the project and
its way of working. You must've been around for a good while and you should
its way of working. You must have been around for a good while and you should
have no plans in vanishing in the near future.
We do not make the list of participants public mostly because it tends to vary

When using said CA bundle to verify a server cert, you will experience
problems if your CA store does not contain the certificates for the
intermediates if the server doesn't provide them.
intermediates if the server does not provide them.
The TLS protocol mandates that the intermediate certificates are sent in the
handshake, but as browsers have ways to survive or work around such
omissions, missing intermediates in TLS handshakes still happen that
browser-users won't notice.
browser-users will not notice.
Browsers work around this problem in two ways: they cache intermediate
certificates from previous transfers and some implement the TLS "AIA"
@ -51,7 +51,7 @@
## Ciphers
Clients give servers a list of ciphers to select from. If the list doesn't
Clients give servers a list of ciphers to select from. If the list does not
include any ciphers the server wants/can use, the connection handshake
fails.
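With libcurl the offered list can be adjusted with `CURLOPT_SSL_CIPHER_LIST` (the command line tool exposes the same via `--ciphers`); the cipher names below are OpenSSL-style and purely illustrative, since the accepted syntax depends on the TLS backend:

```c
#include <curl/curl.h>

static void set_ciphers(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                   "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384");
}
```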
@ -76,7 +76,7 @@
BEAST is the name of a TLS 1.0 attack that surfaced 2011. When adding means
to mitigate this attack, it turned out that some broken servers out there in
the wild didn't work properly with the BEAST mitigation in place.
the wild did not work properly with the BEAST mitigation in place.
To make such broken servers work, the --ssl-allow-beast option was
introduced. Exactly as it sounds, it re-introduces the BEAST vulnerability
@ -89,7 +89,7 @@
depending on the OS or build configuration. The --ssl-no-revoke option was
introduced in 7.44.0 to disable revocation checking but currently is only
supported for Schannel (the native Windows SSL library), with an exception
in the case of Windows' Untrusted Publishers block list which it seems can't
in the case of Windows' Untrusted Publishers block list which it seems cannot
be bypassed. This option may have broader support to accommodate other SSL
backends in the future.
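In libcurl terms, the same behavior is requested with the `CURLSSLOPT_NO_REVOKE` bit; a short sketch:

```c
#include <curl/curl.h>

static void disable_revocation_checks(CURL *curl)
{
  /* only honored by backends that support it, currently Schannel */
  curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);
}
```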
@ -13,7 +13,7 @@ Native SSL
If libcurl was built with Schannel or Secure Transport support (the native SSL
libraries included in Windows and Mac OS X), then this does not apply to
you. Scroll down for details on how the OS-native engines handle SSL
certificates. If you're not sure, then run "curl -V" and read the results. If
certificates. If you are not sure, then run "curl -V" and read the results. If
the version string says `Schannel` in it, then it was built with Schannel
support.
@ -22,11 +22,11 @@ It is about trust
This system is about trust. In your local CA certificate store you have certs
from *trusted* Certificate Authorities that you then can use to verify that the
server certificates you see are valid. They're signed by one of the CAs you
server certificates you see are valid. They are signed by one of the CAs you
trust.
Which CAs do you trust? You can decide to trust the same set of companies your
operating system trusts, or the set one of the known browsers trust. That's
operating system trusts, or the set one of the known browsers trust. That is
basically trust via someone else you trust. You should just be aware that
modern operating systems and browsers are setup to trust *hundreds* of
companies and recent years several such CAs have been found untrustworthy.
@ -42,8 +42,8 @@ If you communicate with HTTPS, FTPS or other TLS-using servers using
certificates that are signed by CAs present in the store, you can be sure
that the remote server really is the one it claims to be.
If the remote server uses a self-signed certificate, if you don't install a CA
cert store, if the server uses a certificate signed by a CA that isn't
If the remote server uses a self-signed certificate, if you do not install a CA
cert store, if the server uses a certificate signed by a CA that is not
included in the store you use or if the remote host is an impostor
impersonating your favorite site, and you want to transfer files from this
server, do one of the following:
@ -103,11 +103,11 @@ server, do one of the following:
certificate store or use it stand-alone as described. Just remember that
the security is no better than the way you obtained the certificate.
4. If you're using the curl command line tool, you can specify your own CA
4. If you are using the curl command line tool, you can specify your own CA
cert file by setting the environment variable `CURL_CA_BUNDLE` to the path
of your choice.
If you're using the curl command line tool on Windows, curl will search
If you are using the curl command line tool on Windows, curl will search
for a CA cert file named "curl-ca-bundle.crt" in these directories and in
this order:
1. application's directory
@ -122,7 +122,7 @@ server, do one of the following:
way for you: [CA Extract](https://curl.se/docs/caextract.html)
Neglecting to use one of the above methods when dealing with a server using a
certificate that isn't signed by one of the certificates in the installed CA
certificate that is not signed by one of the certificates in the installed CA
certificate store, will cause SSL to report an error ("certificate verify
failed") during the handshake and SSL will then refuse further communication
with that server.
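For libcurl-using applications, pointing at a CA bundle of your own is done with `CURLOPT_CAINFO`; a minimal sketch that keeps verification enabled (path and URL are illustrative):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  CURLcode res;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* CA bundle used to verify the server; adjust the path as needed */
  curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-bundle.crt");
  /* keep peer and host verification on (these are the defaults) */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);

  res = curl_easy_perform(curl);
  if(res != CURLE_OK)
    fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}
```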
@ -34,7 +34,7 @@
1.15 Monitor connections in the connection pool
1.16 Try to URL encode given URL
1.17 Add support for IRIs
1.18 try next proxy if one doesn't work
1.18 try next proxy if one does not work
1.19 provide timing info for each redirect
1.20 SRV and URI DNS records
1.21 netrc caching and sharing
@ -78,7 +78,7 @@
5.3 Rearrange request header order
5.4 Allow SAN names in HTTP/2 server push
5.5 auth= in URLs
5.6 alt-svc should fallback if alt-svc doesn't work
5.6 alt-svc should fallback if alt-svc does not work
6. TELNET
6.1 ditch stdin
@ -170,7 +170,7 @@
19. Build
19.1 roffit
19.2 Enable PIE and RELRO by default
19.3 Don't use GNU libtool on OpenBSD
19.3 Do not use GNU libtool on OpenBSD
19.4 Package curl for Windows in a signed installer
19.5 make configure use --cache-file more and better
@ -201,7 +201,7 @@
1.2 Consult %APPDATA% also for .netrc
%APPDATA%\.netrc is not considered when running on Windows. Shouldn't it?
%APPDATA%\.netrc is not considered when running on Windows. Should it not?
See https://github.com/curl/curl/issues/4016
@ -225,7 +225,7 @@
Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
there we need libssh2 to properly tell us when we pass in a too small buffer
and its current API (as of libssh2 1.2.7) doesn't.
and its current API (as of libssh2 1.2.7) does not.
1.6 native IDN support on macOS
@ -282,7 +282,7 @@
is may cause name resolves to fail unless res_init() is called. We should
consider calling res_init() + retry once unconditionally on all name resolve
failures to mitigate against this. Firefox works like that. Note that Windows
doesn't have res_init() or an alternative.
does not have res_init() or an alternative.
https://github.com/curl/curl/issues/2251
@ -292,7 +292,7 @@
close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares
does not use those functions and instead opens and closes the sockets
itself. This means that when curl passes the c-ares socket to the
CURLMOPT_SOCKETFUNCTION it isn't owned by the application like other sockets.
CURLMOPT_SOCKETFUNCTION it is not owned by the application like other sockets.
See https://github.com/curl/curl/issues/2734
@ -322,7 +322,7 @@
reuse purpose it is verified that it is still alive.
Those connections may get closed by the server side for idleness or they may
get a HTTP/2 ping from the peer to verify that they're still alive. By adding
get a HTTP/2 ping from the peer to verify that they are still alive. By adding
monitoring of the connections while in the pool, libcurl can detect dead
connections (and close them) better and earlier, and it can handle HTTP/2
pings to keep such ones alive even when not actively doing transfers on them.
@ -345,7 +345,7 @@
To make that work smoothly for curl users even on Windows, curl would
probably need to be able to convert from several input encodings.
1.18 try next proxy if one doesn't work
1.18 try next proxy if one does not work
Allow an application to specify a list of proxies to try, and failing to
connect to the first go on and try the next instead until the list is
@ -447,8 +447,8 @@
1.32 add asynch getaddrinfo support
Use getaddrinfo_a() to provide an asynch name resolver backend to libcurl
that doesn't use threads and doesn't depend on c-ares. The getaddrinfo_a
function is (probably?) glibc specific but that's a widely used libc among
that does not use threads and does not depend on c-ares. The getaddrinfo_a
function is (probably?) glibc specific but that is a widely used libc among
our users.
https://github.com/curl/curl/pull/6746
@ -457,7 +457,7 @@
2.1 More non-blocking
Make sure we don't ever loop because of non-blocking sockets returning
Make sure we do not ever loop because of non-blocking sockets returning
EWOULDBLOCK or similar. Blocking cases include:
- Name resolves on non-windows unless c-ares or the threaded resolver is used.
@ -496,7 +496,7 @@
2.4 Split connect and authentication process
The multi interface treats the authentication process as part of the connect
phase. As such any failures during authentication won't trigger the relevant
phase. As such any failures during authentication will not trigger the relevant
QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.
2.5 Edge-triggered sockets should work
@ -525,7 +525,7 @@
2.8 dynamically decide to use socketpair
For users who don't use curl_multi_wait() or don't care for
For users who do not use curl_multi_wait() or do not care for
curl_multi_wakeup(), we could introduce a way to make libcurl NOT
create a socketpair in the multi handle.
@ -566,7 +566,7 @@
4.5 ASCII support
FTP ASCII transfers do not follow RFC959. They don't convert the data
FTP ASCII transfers do not follow RFC959. They do not convert the data
accordingly.
4.6 GSSAPI via Windows SSPI
@ -636,7 +636,7 @@
Additionally this should be implemented for proxy base URLs as well.
5.6 alt-svc should fallback if alt-svc doesn't work
5.6 alt-svc should fallback if alt-svc does not work
The alt-svc: header provides a set of alternative services for curl to use
instead of the original. If the first attempted one fails, it should try the
@ -655,7 +655,7 @@
6.2 ditch telnet-specific select
Move the telnet support's network select() loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface won't
into the main transfer loop. Until this is done, the multi interface will not
work for telnet.
6.3 feature negotiation debug data
@ -735,7 +735,7 @@
11.4 Create remote directories
Support for creating remote directories when uploading a file to a directory
that doesn't exist on the server, just like --ftp-create-dirs.
that does not exist on the server, just like --ftp-create-dirs.
12. FILE
@ -768,7 +768,7 @@
"Look at SSL cafile - quick traces look to me like these are done on every
request as well, when they should only be necessary once per SSL context (or
once per handle)". The major improvement we can rather easily do is to make
sure we don't create and kill a new SSL "context" for every request, but
sure we do not create and kill a new SSL "context" for every request, but
instead make one for every connection and re-use that SSL context in the same
style connections are re-used. It will make us use slightly more memory but
it will libcurl do less creations and deletions of SSL contexts.
@ -790,7 +790,7 @@
13.6 Provide callback for cert verification
OpenSSL supports a callback for customised verification of the peer
certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
certificate, but this does not seem to be exposed in the libcurl APIs. Could
it be? There's so much that could be done if it were!
13.8 Support DANE
@ -820,7 +820,7 @@
AIA can provide various things like CRLs but more importantly information
about intermediate CA certificates that can allow validation path to be
fulfilled when the HTTPS server doesn't itself provide them.
fulfilled when the HTTPS server does not itself provide them.
Since AIA is about downloading certs on demand to complete a TLS handshake,
it is probably a bit tricky to get done right.
@ -919,7 +919,7 @@
The SFTP code in libcurl checks the file size *before* a transfer starts and
then proceeds to transfer exactly that amount of data. If the remote file
grows while the transfer is in progress libcurl won't notice and will not
grows while the transfer is in progress libcurl will not notice and will not
adapt. The OpenSSH SFTP command line tool does and libcurl could also just
attempt to download more to see if there is more to get...
@ -932,7 +932,7 @@
17.5 SSH over HTTPS proxy with more backends
The SSH based protocols SFTP and SCP didn't work over HTTPS proxy at
The SSH based protocols SFTP and SCP did not work over HTTPS proxy at
all until PR https://github.com/curl/curl/pull/6021 brought the
functionality with the libssh2 backend. Presumably, this support
can/could be added for the other backends as well.
@ -1069,7 +1069,7 @@
When --retry is used and curl actually retries transfer, it should use the
already transferred data and do a resumed transfer for the rest (when
possible) so that it doesn't have to transfer the same data again that was
possible) so that it does not have to transfer the same data again that was
already transferred before the retry.
See https://github.com/curl/curl/issues/1084
@ -1096,7 +1096,7 @@
provides the "may overwrite any file" risk.
This is extra tricky if the original URL has no file name part at all since
then the current code path will error out with an error message, and we can't
then the current code path will error out with an error message, and we cannot
*know* already at that point if curl will be redirected to a URL that has a
file name...
@ -1161,7 +1161,7 @@
- If splitting up the work improves the transfer rate, it could then be done
again. Then again, etc up to a limit.
This way, if transfer B fails (because Range: isn't supported) it will let
This way, if transfer B fails (because Range: is not supported) it will let
transfer A remain the single one. N and M could be set to some sensible
defaults.
@ -1179,7 +1179,7 @@
Users who are for example doing large downloads in CI or remote setups might
want the occasional progress meter update to see that the transfer is
progressing and hasn't stuck, but they may not appreciate the
progressing and has not stuck, but they may not appreciate the
many-times-a-second frequency curl can end up doing it with now.
19. Build
@ -1201,7 +1201,7 @@
to no impact, neither on the performance nor on the general functionality of
curl.
19.3 Don't use GNU libtool on OpenBSD
19.3 Do not use GNU libtool on OpenBSD
When compiling curl on OpenBSD with "--enable-debug" it will give linking
errors when you use GNU libtool. This can be fixed by using the libtool
provided by OpenBSD itself. However for this the user always needs to invoke
@ -1232,8 +1232,8 @@
20.2 nicer lacking perl message
If perl wasn't found by the configure script, don't attempt to run the tests
but explain something nice why it doesn't.
If perl was not found by the configure script, do not attempt to run the tests
but explain something nice why it does not.
20.3 more protocols supported
@ -1248,15 +1248,15 @@
20.5 Add support for concurrent connections
Tests 836, 882 and 938 were designed to verify that separate connections
aren't used when using different login credentials in protocols that
shouldn't re-use a connection under such circumstances.
are not used when using different login credentials in protocols that
should not re-use a connection under such circumstances.
Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
Unfortunately, ftpserver.pl does not appear to support multiple concurrent
connections. The read while() loop seems to loop until it receives a
disconnect from the client, where it then enters the waiting for connections
loop. When the client opens a second connection to the server, the first
connection hasn't been dropped (unless it has been forced - which we
shouldn't do in these tests) and thus the wait for connections loop is never
connection has not been dropped (unless it has been forced - which we
should not do in these tests) and thus the wait for connections loop is never
entered to receive the second connection.
20.6 Use the RFC6265 test suite
@ -1270,7 +1270,7 @@
20.7 Support LD_PRELOAD on macOS
LD_RELOAD doesn't work on macOS, but there are tests which require it to run
LD_RELOAD does not work on macOS, but there are tests which require it to run
properly. Look into making the preload support in runtests.pl portable such
that it uses DYLD_INSERT_LIBRARIES on macOS.

View File

@ -2,7 +2,7 @@
## Background
This document assumes that you're familiar with HTML and general networking.
This document assumes that you are familiar with HTML and general networking.
The increasing amount of applications moving to the web has made "HTTP
Scripting" more frequently requested and wanted. To be able to automatically
@ -59,7 +59,7 @@
want to know the amount of milliseconds between two points in a transfer. For
those, and other similar situations, the
[`--trace-time`](https://curl.se/docs/manpage.html#--trace-time) option
is what you need. It'll prepend the time to each trace output line:
is what you need. It will prepend the time to each trace output line:
curl --trace-ascii d.txt --trace-time http://example.com/
@ -73,14 +73,14 @@
## Spec
The Uniform Resource Locator format is how you specify the address of a
particular resource on the Internet. You know these, you've seen URLs like
particular resource on the Internet. You know these, you have seen URLs like
https://curl.se or https://yourbank.com a million times. RFC 3986 is the
canonical spec. And yeah, the formal name is not URL, it is URI.
## Host
The host name is usually resolved using DNS or your /etc/hosts file to an IP
address and that's what curl will communicate with. Alternatively you specify
address and that is what curl will communicate with. Alternatively you specify
the IP address directly in the URL instead of a name.
For development and other trying out situations, you can point to a different
@ -92,7 +92,7 @@
## Port number
Each protocol curl supports operates on a default port number, be it over TCP
or in some cases UDP. Normally you don't have to take that into
or in some cases UDP. Normally you do not have to take that into
consideration, but at times you run test servers on other ports or
similar. Then you can specify the port number in the URL with a colon and a
number immediately following the host name. Like when doing HTTP to port
@ -166,7 +166,7 @@
A single curl command line may involve one or many URLs. The most common case
is probably to just use one, but you can specify any amount of URLs. Yes
any. No limits. You'll then get requests repeated over and over for all the
any. No limits. You will then get requests repeated over and over for all the
given URLs.
Example, send two GETs:
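(a minimal sketch; the two URLs below are just placeholders)
    curl http://url1.example.com http://url2.example.com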
@ -185,13 +185,13 @@
## Multiple HTTP methods in a single command line
Sometimes you need to operate on several URLs in a single command line and do
different HTTP methods on each. For this, you'll enjoy the
different HTTP methods on each. For this, you will enjoy the
[`--next`](https://curl.se/docs/manpage.html#-:) option. It is basically
a separator that separates a bunch of options from the next. All the URLs
before `--next` will get the same method and will get all the POST data
merged into one.
When curl reaches the `--next` on the command line, it'll sort of reset the
When curl reaches the `--next` on the command line, it will sort of reset the
method and the POST data and allow a new set.
Perhaps this is best shown with a few examples. To send first a HEAD and then
@ -236,7 +236,7 @@
previous URL.
If the original form was seen on the page `www.example.com/when/birth.html`,
the second page you'll get will become
the second page you will get will become
`www.example.com/when/junk.cgi?birthyear=1905&press=OK`.
Most search engines work this way.
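As a hedged sketch, that same resulting request can be issued directly with
curl; the URL is quoted so the shell does not interpret the & character:
    curl "www.example.com/when/junk.cgi?birthyear=1905&press=OK"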
@ -249,13 +249,13 @@
## POST
The GET method makes all input field names get displayed in the URL field of
your browser. That's generally a good thing when you want to be able to
your browser. That is generally a good thing when you want to be able to
bookmark that page with your given data, but it is an obvious disadvantage if
you entered secret information in one of the fields or if there are a large
amount of fields creating a long and unreadable URL.
The HTTP protocol then offers the POST method. This way the client sends the
data separated from the URL and thus you won't see any of it in the URL
data separated from the URL and thus you will not see any of it in the URL
address field.
The form would look similar to the previous one:
@ -315,7 +315,7 @@
A common way for HTML based applications to pass state information between
pages is to add hidden fields to the forms. Hidden fields are already filled
in, they aren't displayed to the user and they get passed along just as all
in, they are not displayed to the user and they get passed along just as all
the other fields.
A similar example form with one visible field, one hidden field and one
@ -329,15 +329,15 @@
</form>
```
To POST this with curl, you won't have to think about if the fields are
hidden or not. To curl they're all the same:
To POST this with curl, you will not have to think about if the fields are
hidden or not. To curl they are all the same:
curl --data "birthyear=1905&press=OK&person=daniel" [URL]
## Figure Out What A POST Looks Like
When you're about fill in a form and send to a server by using curl instead
of a browser, you're of course interested in sending a POST exactly the way
When you are about to fill in a form and send it to a server by using curl instead
of a browser, you are of course interested in sending a POST exactly the way
your browser does.
An easy way to get to see this, is to save the HTML page with the form on
@ -364,7 +364,7 @@
## Basic Authentication
HTTP Authentication is the ability to tell the server your username and
password so that it can verify that you're allowed to do the request you're
password so that it can verify that you are allowed to do the request you are
doing. The Basic authentication used in HTTP (which is the type curl uses by
default) is **plain text** based, which means it sends username and password
only slightly obfuscated, but still fully readable by anyone that sniffs on
@ -419,7 +419,7 @@
A HTTP request may include a 'referer' field (yes it is misspelled), which
can be used to tell from which URL the client got to this particular
resource. Some programs/scripts check the referer field of requests to verify
that this wasn't arriving from an external site or an unknown page. While
that this was not arriving from an external site or an unknown page. While
this is a stupid way to check something so easily forged, many scripts still
do it. Using curl, you can put anything you want in the referer-field and
thus more easily be able to fool the server into serving your request.
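A minimal sketch, with placeholder URLs for the faked referring page and the
target:
    curl --referer http://www.example.com/from-page/ http://www.example.com/target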
@ -439,14 +439,14 @@
At times, you will see that getting a page with curl will not return the same
page that you see when getting the page with your browser. Then you know it
is time to set the User Agent field to fool the server into thinking you're
is time to set the User Agent field to fool the server into thinking you are
one of those browsers.
To make curl look like Internet Explorer 5 on a Windows 2000 box:
curl --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" [URL]
Or why not look like you're using Netscape 4.73 on an old Linux box:
Or why not look like you are using Netscape 4.73 on an old Linux box:
curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]
@ -477,7 +477,7 @@
## Other redirects
Browser typically support at least two other ways of redirects that curl
doesn't: first the html may contain a meta refresh tag that asks the browser
does not: first the html may contain a meta refresh tag that asks the browser
to load a specific URL after a set number of seconds, or it may use
javascript to do it.
@ -529,7 +529,7 @@
Curl's "cookie engine" gets enabled when you use the
[`--cookie`](https://curl.se/docs/manpage.html#-b) option. If you only
want curl to understand received cookies, use `--cookie` with a file that
doesn't exist. Example, if you want to let curl understand cookies from a
does not exist. Example, if you want to let curl understand cookies from a
page and follow a location (and thus possibly send back cookies it received),
you can invoke it like:
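(a minimal sketch; the cookie file name is an arbitrary non-existent file and
the URL is a placeholder)
    curl --cookie nowhere.txt --location http://www.example.com/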
@ -539,7 +539,7 @@
format that Netscape and Mozilla once used. It is a convenient way to share
cookies between scripts or invokes. The `--cookie` (`-b`) switch
automatically detects if a given file is such a cookie file and parses it,
and by using the `--cookie-jar` (`-c`) option you'll make curl write a new
and by using the `--cookie-jar` (`-c`) option you will make curl write a new
cookie file at the end of an operation:
curl --cookie cookies.txt --cookie-jar newcookies.txt \
@ -580,7 +580,7 @@
verifying the server's certificate against a locally stored CA cert
bundle. Failing the verification will cause curl to deny the connection. You
must then use [`--insecure`](https://curl.se/docs/manpage.html#-k)
(`-k`) in case you want to tell curl to ignore that the server can't be
(`-k`) in case you want to tell curl to ignore that the server cannot be
verified.
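A minimal sketch of such an insecure transfer, with a placeholder URL:
    curl --insecure https://self-signed.example.com/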
More about server certificate verification and ca cert bundles can be read in
@ -628,7 +628,7 @@
curl -X POST http://example.org/
... but curl will still think and act as if it sent a GET so it won't send
... but curl will still think and act as if it sent a GET so it will not send
any request body etc.
# Web Login
@ -651,7 +651,7 @@
Some web-based login systems feature various amounts of javascript, and
sometimes they use such code to set or modify cookie contents. Possibly they
do that to prevent programmed logins, like this manual describes how to...
Anyway, if reading the code isn't enough to let you repeat the behavior
Anyway, if reading the code is not enough to let you repeat the behavior
manually, capturing the HTTP requests done by your browsers and analyzing the
sent cookies is usually a working method to work out how to shortcut the
javascript need.
@ -666,7 +666,7 @@
## Some debug tricks
Many times when you run curl on a site, you'll notice that the site doesn't
Many times when you run curl on a site, you will notice that the site does not
seem to respond the same way to your curl requests as it does to your
browser's.

View File

@ -150,7 +150,7 @@ since it often means passing around the password in plain text and is thus a
security risk.
URLs for IMAP, POP3 and SMTP also support *login options* as part of the
userinfo field. They're provided as a semicolon after the password and then
userinfo field. They are provided as a semicolon after the password and then
the options.
## Hostname
@ -232,7 +232,7 @@ Anything else will make curl fail to parse the URL.
### Windows-specific FILE details
curl accepts that the FILE URL's path starts with a "drive letter". That's a
curl accepts that the FILE URL's path starts with a "drive letter". That is a
single letter `a` to `z` followed by a colon or a pipe character (`|`).
The Windows operating system itself will convert some file accesses to perform
@ -296,7 +296,7 @@ MAILINDEX numbers returned then you could search via URL:
imap://user:password@mail.example.com/INBOX?TEXT%20%22foo%20bar%22
.. but if you wanted matching UID numbers you'd have to use a custom request:
.. but if you wanted matching UID numbers you would have to use a custom request:
imap://user:password@mail.example.com/INBOX -X "UID SEARCH TEXT \"foo bar\""

View File

@ -1,7 +1,7 @@
Version Numbers and Releases
============================
Curl is not only curl. Curl is also libcurl. They're actually individually
Curl is not only curl. Curl is also libcurl. They are actually individually
versioned, but they usually follow each other closely.
The version numbering is always built up using the same system:

View File

@ -7,5 +7,5 @@ Example: --upload-file local --append ftp://example.com/
Added: 4.8
---
When used in an upload, this makes curl append to the target file instead of
overwriting it. If the remote file doesn't exist, it will be created. Note
overwriting it. If the remote file does not exist, it will be created. Note
that this flag is ignored by some SFTP servers (including OpenSSH).

View File

@ -11,7 +11,7 @@ Added: 5.0
Tells curl to use the specified client certificate file when getting a file
with HTTPS, FTPS or another SSL-based protocol. The certificate must be in
PKCS#12 format if using Secure Transport, or PEM format if using any other
engine. If the optional password isn't specified, it will be queried for on
engine. If the optional password is not specified, it will be queried for on
the terminal. Note that this option assumes a \&"certificate" file that is the
private key and the client certificate concatenated! See --cert and --key to
specify them independently.

View File

@ -19,10 +19,10 @@ This command line option will activate the cookie engine that makes curl
record and use cookies. Another way to activate it is to use the --cookie
option.
If the cookie jar can't be created or written to, the whole curl operation
won't fail or even report an error clearly. Using --verbose will get a warning
displayed, but that is the only visible feedback you get about this possibly
lethal situation.
If the cookie jar cannot be created or written to, the whole curl operation
will not fail or even report an error clearly. Using --verbose will get a
warning displayed, but that is the only visible feedback you get about this
possibly lethal situation.
If this option is used several times, the last specified file name will be
used.

View File

@ -15,7 +15,7 @@ data should be in the format "NAME1=VALUE1; NAME2=VALUE2".
If no '=' symbol is used in the argument, it is instead treated as a filename
to read previously stored cookie from. This option also activates the cookie
engine which will make curl record incoming cookies, which may be handy if
you're using this in combination with the --location option or do multiple URL
you are using this in combination with the --location option or do multiple URL
transfers on the same invoke. If the file name is exactly a minus ("-"), curl
will instead read the contents from stdin.
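A hedged sketch of the combination described above, reading cookies from a
file while following redirects (file name and URL are placeholders):
    curl --cookie cookies.txt --location https://example.com/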
@ -25,7 +25,7 @@ The file format of the file to read cookies from should be plain HTTP headers
The file specified with --cookie is only used as input. No cookies will be
written to the file. To store cookies, use the --cookie-jar option.
If you use the Set-Cookie file format and don't specify a domain then the
If you use the Set-Cookie file format and do not specify a domain then the
cookie is not sent since the domain will never match. To address this, set a
domain in Set-Cookie line (doing that will include sub-domains) or preferably:
use the Netscape format.

View File

@ -19,7 +19,7 @@ curl using one of the following syntaxes:
.RS
.IP "content"
This will make curl URL-encode the content and pass that on. Just be careful
so that the content doesn't contain any = or @ symbols, as that will then make
so that the content does not contain any = or @ symbols, as that will then make
the syntax match one of the other cases below!
.IP "=content"
This will make curl URL-encode the content and pass that on. The preceding =

View File

@ -30,5 +30,5 @@ If you start the data with the letter @, the rest should be a file name to
read the data from, or - if you want curl to read the data from stdin. Posting
data from a file named \&'foobar' would thus be done with --data @foobar. When
--data is told to read from a file like that, carriage returns and newlines
will be stripped out. If you don't want the @ character to have a special
will be stripped out. If you do not want the @ character to have a special
interpretation use --data-raw instead.

View File

@ -10,7 +10,7 @@ Set LEVEL to tell the server what it is allowed to delegate when it
comes to user credentials.
.RS
.IP "none"
Don't allow any delegation.
Do not allow any delegation.
.IP "policy"
Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos
service ticket, which is a matter of realm policy.

View File

@ -6,6 +6,6 @@ Category: ftp sftp curl
Example: --ftp-create-dirs -T file ftp://example.com/remote/path/file
Added: 7.10.7
---
When an FTP or SFTP URL/operation uses a path that doesn't currently exist on
When an FTP or SFTP URL/operation uses a path that does not currently exist on
the server, the standard behavior of curl is to fail. Using this option, curl
will instead attempt to create missing directories.

View File

@ -11,7 +11,7 @@ behavior, but using this option can be used to override a previous --ftp-port
option.
If this option is used several times, only the first one is used. Undoing an
enforced passive really isn't doable but you must then instead enforce the
enforced passive really is not doable but you must then instead enforce the
correct --ftp-port again.
Passive mode means that curl will try the EPSV command first and then PASV,

View File

@ -7,4 +7,4 @@ Example: --ftp-ssl-control ftp://example.com
---
Require SSL/TLS for the FTP login, clear for transfer. Allows secure
authentication, but non-encrypted data transfers for efficiency. Fails the
transfer if the server doesn't support SSL/TLS.
transfer if the server does not support SSL/TLS.

View File

@ -16,5 +16,5 @@ If used in combination with --head, the POST data will instead be appended to
the URL with a HEAD request.
If this option is used several times, only the first one is used. This is
because undoing a GET doesn't make sense, but you should then instead enforce
because undoing a GET does not make sense, but you should then instead enforce
the alternative method you prefer.

View File

@ -15,7 +15,7 @@ specify any number of extra headers. Note that if you should add a custom
header that has the same name as one of the internal ones curl would use, your
externally set header will be used instead of the internal one. This allows
you to make even trickier stuff than curl would normally do. You should not
replace internally set headers without knowing perfectly well what you're
replace internally set headers without knowing perfectly well what you are
doing. Remove an internal header by giving a replacement without content on
the right side of the colon, as in: -H \&"Host:". If you send the custom
header with no-value then its header must be terminated with a semicolon, such

View File

@ -12,4 +12,4 @@ files larger than 2 gigabytes.
For FTP (since 7.46.0), skip the RETR command to figure out the size before
downloading a file.
This option doesn't work for HTTP if libcurl was built to use hyper.
This option does not work for HTTP if libcurl was built to use hyper.

View File

@ -10,4 +10,4 @@ Added: 7.9.7
When curl is told to read cookies from a given file, this option will make it
discard all "session cookies". This will basically have the same effect as if
a new session is started. Typical browsers always discard session cookies when
they're closed down.
they are closed down.

View File

@ -8,7 +8,7 @@ Example: --limit-rate 10M $URL
Added: 7.10
---
Specify the maximum transfer rate you want curl to use - for both downloads
and uploads. This feature is useful if you have a limited pipe and you'd like
and uploads. This feature is useful if you have a limited pipe and you would like
your transfer not to use your entire bandwidth. To make it slower than it
otherwise would be.

View File

@ -9,7 +9,7 @@ Example: --list-only ftp://example.com/dir/
(FTP)
When listing an FTP directory, this switch forces a name-only view. This is
especially useful if the user wants to machine-parse the contents of an FTP
directory since the normal directory view doesn't use a standard look or
directory since the normal directory view does not use a standard look or
format. When used like this, the option causes an NLST command to be sent to
the server instead of LIST.

View File

@ -8,5 +8,5 @@ Added: 7.10.4
---
Like --location, but will allow sending the name + password to all hosts that
the site may redirect to. This may or may not introduce a security breach if
the site redirects you to a site to which you'll send your authentication info
(which is plaintext in the case of HTTP Basic authentication).
the site redirects you to a site to which you will send your authentication
info (which is plaintext in the case of HTTP Basic authentication).

View File

@ -11,7 +11,7 @@ location (indicated with a Location: header and a 3XX response code), this
option will make curl redo the request on the new place. If used together with
--include or --head, headers from all requested pages will be shown. When
authentication is used, curl only sends its credentials to the initial
host. If a redirect takes curl to a different host, it won't be able to
host. If a redirect takes curl to a different host, it will not be able to
intercept the user+password. See also --location-trusted on how to change
this. You can limit the amount of redirects to follow by using the
--max-redirs option.

View File

@ -13,6 +13,6 @@ This option requires a library built with GSS-API or SSPI support. Use
When using this option, you must also provide a fake --user option to activate
the authentication code properly. Sending a '-u :' is enough as the user name
and password from the --user option aren't actually used.
and password from the --user option are not actually used.
If this option is used several times, only the first one is used.

View File

@ -9,7 +9,7 @@ Makes curl scan the *.netrc* (*_netrc* on Windows) file in the user's home
directory for login name and password. This is typically used for FTP on
Unix. If used with HTTP, curl will enable user authentication. See
*netrc(5)* and *ftp(1)* for details on the file format. Curl will not
complain if that file doesn't have the right permissions (it should be
complain if that file does not have the right permissions (it should be
neither world- nor group-readable). The environment variable "HOME" is used
to find the home directory.
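A minimal sketch of a netrc entry, with hypothetical host and credentials:
    machine example.com login daniel password secret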

View File

@ -13,7 +13,7 @@ This option specifies the directory in which files should be stored, when
The given output directory is used for all URLs and output options on the
command line, up until the first --next.
If the specified target directory doesn't exist, the operation will fail
If the specified target directory does not exist, the operation will fail
unless --create-dirs is also used.
If this option is used multiple times, the last specified directory will be

View File

@ -27,7 +27,7 @@ this:
curl -o aa example.com -o bb example.net
and the order of the -o options and the URLs doesn't matter, just that the
and the order of the -o options and the URLs does not matter, just that the
first -o is for the first URL and so on, so the above command line can also be
written as

View File

@ -21,7 +21,7 @@ SMTP, LDAP, etc.
.IP "ALL_PROXY [protocol://]<host>[:port]"
Sets the proxy server to use if no protocol-specific proxy is set.
.IP "NO_PROXY <comma-separated list of hosts/domains>"
list of host names that shouldn't go through any proxy. If set to an asterisk
list of host names that should not go through any proxy. If set to an asterisk
\&'*' only, it matches all hosts. Each name in this list is matched as either
a domain name which contains the hostname, or the hostname itself.
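A hedged sketch, assuming a hypothetical proxy host and port; the request to
www.example.com below bypasses the proxy:
    NO_PROXY=example.com curl -x http://proxy.local:8080 http://www.example.com/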
@ -38,13 +38,13 @@ The list of host names can also be include numerical IP addresses, and IPv6
versions should then be given without enclosing brackets.
IPv6 numerical addresses are compared as strings, so they will only match if
the representations are the same: "::1" is the same as "::0:1" but they don't
the representations are the same: "::1" is the same as "::0:1" but they do not
match.
.IP "CURL_SSL_BACKEND <TLS backend>"
If curl was built with support for "MultiSSL", meaning that it has built-in
support for more than one TLS backend, this environment variable can be set to
the case insensitive name of the particular backend to use when curl is
invoked. Setting a name that isn't a built-in alternative will make curl
invoked. Setting a name that is not a built-in alternative will make curl
stay with the default.
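A minimal sketch, selecting one of the backend names listed below (this only
has an effect in a MultiSSL build):
    CURL_SSL_BACKEND=gnutls curl https://example.com/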
SSL backend names (case-insensitive): bearssl, gnutls, gskit, mbedtls,
@ -64,7 +64,7 @@ BoringSSL, GnuTLS, NSS and wolfSSL.
The proxy string may be specified with a protocol:// prefix to specify
alternative proxy protocols. (Added in 7.21.7)
If no protocol is specified in the proxy string or if the string doesn't match
If no protocol is specified in the proxy string or if the string does not match
a supported one, the proxy will be treated as an HTTP proxy.
The supported proxy protocol prefixes are as follows:
@ -95,42 +95,42 @@ A feature or option that was needed to perform the desired request was not
enabled or was explicitly disabled at build-time. To make curl able to do
this, you probably need another build of libcurl!
.IP 5
Couldn't resolve proxy. The given proxy host could not be resolved.
Could not resolve proxy. The given proxy host could not be resolved.
.IP 6
Couldn't resolve host. The given remote host could not be resolved.
Could not resolve host. The given remote host could not be resolved.
.IP 7
Failed to connect to host.
.IP 8
Weird server reply. The server sent data curl couldn't parse.
Weird server reply. The server sent data curl could not parse.
.IP 9
FTP access denied. The server denied login or denied access to the particular
resource or directory you wanted to reach. Most often you tried to change to a
directory that doesn't exist on the server.
directory that does not exist on the server.
.IP 10
FTP accept failed. While waiting for the server to connect back when an active
FTP session is used, an error code was sent over the control connection or
similar.
.IP 11
FTP weird PASS reply. Curl couldn't parse the reply sent to the PASS request.
FTP weird PASS reply. Curl could not parse the reply sent to the PASS request.
.IP 12
During an active FTP session while waiting for the server to connect back to
curl, the timeout expired.
.IP 13
FTP weird PASV reply, Curl couldn't parse the reply sent to the PASV request.
FTP weird PASV reply, Curl could not parse the reply sent to the PASV request.
.IP 14
FTP weird 227 format. Curl couldn't parse the 227-line the server sent.
FTP weird 227 format. Curl could not parse the 227-line the server sent.
.IP 15
FTP can't get host. Couldn't resolve the host IP we got in the 227-line.
FTP cannot use host. Could not resolve the host IP we got in the 227-line.
.IP 16
HTTP/2 error. A problem was detected in the HTTP2 framing layer. This is
somewhat generic and can be one out of several problems, see the error message
for details.
.IP 17
FTP couldn't set binary. Couldn't change transfer method to binary.
FTP could not set binary. Could not change transfer method to binary.
.IP 18
Partial file. Only a part of the file was transferred.
.IP 19
FTP couldn't download/access the given file, the RETR (or similar) command
FTP could not download/access the given file, the RETR (or similar) command
failed.
.IP 21
FTP quote error. A quote command returned error from the server.
@ -139,9 +139,9 @@ HTTP page not retrieved. The requested url was not found or returned another
error with the HTTP error code being 400 or above. This return code only
appears if --fail is used.
.IP 23
Write error. Curl couldn't write data to a local filesystem or similar.
Write error. Curl could not write data to a local filesystem or similar.
.IP 25
FTP couldn't STOR file. The server denied the STOR operation, used for FTP
FTP could not STOR file. The server denied the STOR operation, used for FTP
uploading.
.IP 26
Read error. Various reading problems.
@ -154,18 +154,18 @@ conditions.
FTP PORT failed. The PORT command failed. Not all FTP servers support the PORT
command, try doing a transfer using PASV instead!
.IP 31
FTP couldn't use REST. The REST command failed. This command is used for
FTP could not use REST. The REST command failed. This command is used for
resumed FTP transfers.
.IP 33
HTTP range error. The range "command" didn't work.
HTTP range error. The range "command" did not work.
.IP 34
HTTP post error. Internal post-request generation error.
.IP 35
SSL connect error. The SSL handshaking failed.
.IP 36
Bad download resume. Couldn't continue an earlier aborted download.
Bad download resume. Could not continue an earlier aborted download.
.IP 37
FILE couldn't read file. Failed to open the file. Permissions?
FILE could not read file. Failed to open the file. Permissions?
.IP 38
LDAP cannot bind. LDAP bind operation failed.
.IP 39
@ -189,7 +189,7 @@ Malformed telnet option.
.IP 51
The peer's SSL certificate or SSH MD5 fingerprint was not OK.
.IP 52
The server didn't reply anything, which here is considered an error.
The server did not reply anything, which here is considered an error.
.IP 53
SSL crypto engine not found.
.IP 54
@ -201,7 +201,7 @@ Failure in receiving network data.
.IP 58
Problem with the local certificate.
.IP 59
Couldn't use specified SSL cipher.
Could not use specified SSL cipher.
.IP 60
Peer certificate cannot be authenticated with known CA certificates.
.IP 61

View File

@ -42,7 +42,7 @@ head spin!
curl is powered by libcurl for all transfer-related features. See
*libcurl(3)* for details.
.SH URL
The URL syntax is protocol-dependent. You'll find a detailed description in
The URL syntax is protocol-dependent. You will find a detailed description in
RFC 3986.
You can specify multiple URLs or parts of URLs by writing part sets within
@ -187,7 +187,7 @@ or without a space between it and its value, although a space is a recommended
separator. The long "double-dash" form, --data for example, requires a space
between it and its value.
Short version options that don't need any additional values can be used
Short version options that do not need any additional values can be used
immediately next to each other, like for example you can specify all the
options -O, -L and -v at once as -OLv.

View File

@ -40,8 +40,8 @@ the server's response will be unspecified, depending on the server's
configuration.
You should also be aware that many HTTP/1.1 servers do not have this feature
enabled, so that when you attempt to get a range, you'll instead get the whole
document.
enabled, so that when you attempt to get a range, you will instead get the
whole document.
FTP and SFTP range downloads only support the simple 'start-stop' syntax
(optionally with one of the numbers omitted). FTP use depends on the extended

View File

@ -14,6 +14,6 @@ Sends the "Referrer Page" information to the HTTP server. This can also be set
with the --header flag of course. When used with --location you can append
";auto" to the --referer URL to make curl automatically set the previous URL
when it follows a Location: header. The \&";auto" string can be used alone,
even if you don't set an initial --referer.
even if you do not set an initial --referer.
If this option is used several times, the last one will be used.

View File

@ -11,7 +11,7 @@ Content-Disposition filename instead of extracting a filename from the URL.
If the server specifies a file name and a file with that name already exists
in the current working directory it will not be overwritten and an error will
occur. If the server doesn't specify a file name then this option has no
occur. If the server does not specify a file name then this option has no
effect.
There's no attempt to decode %-sequences (yet) in the provided file name, so

View File

@ -8,5 +8,5 @@ Example: --request-target "*" -X OPTIONS $URL
---
Tells curl to use an alternative "target" (path) instead of using the path as
provided in the URL. Particularly useful when wanting to issue HTTP requests
without leading slash or other data that doesn't follow the regular URL
without leading slash or other data that does not follow the regular URL
pattern, like "OPTIONS *".

View File

@ -14,7 +14,7 @@ details and explanations. Common additional HTTP requests include PUT and
DELETE, but related technologies like WebDAV offers PROPFIND, COPY, MOVE and
more.
Normally you don't need this option. All sorts of GET, HEAD, POST and PUT
Normally you do not need this option. All sorts of GET, HEAD, POST and PUT
requests are rather invoked by using dedicated command line options.
This option only changes the actual word used in the HTTP request, it does not
@ -23,7 +23,7 @@ request, using -X HEAD will not suffice. You need to use the --head option.
The method string you set with --request will be used for all requests, which
if you for example use --location may cause unintended side-effects when curl
doesn't change request method according to the HTTP 30x response codes - and
does not change request method according to the HTTP 30x response codes - and
similar.
(FTP)

View File

@ -17,9 +17,9 @@ transfers as close as possible to how they were started, but this is not
possible with redirected input or output. For example, before retrying it
removes output data from a failed partial transfer that was written to an
output file. However this is not true of data redirected to a | pipe or >
file, which are not reset. We strongly suggest don't parse or record output
via redirect in combination with this option, since you may receive duplicate
data.
file, which are not reset. We strongly suggest you do not parse or record
output via redirect in combination with this option, since you may receive
duplicate data.
By default curl will not error on an HTTP response code that indicates an HTTP
error, if the transfer was successful. For example, if a server replies 404

View File

@ -6,8 +6,8 @@ Category: curl
Example: --retry-max-time 30 --retry 10 $URL
---
The retry timer is reset before the first transfer attempt. Retries will be
done as usual (see --retry) as long as the timer hasn't reached this given
limit. Notice that if the timer hasn't reached the limit, the request will be
done as usual (see --retry) as long as the timer has not reached this given
limit. Notice that if the timer has not reached the limit, the request will be
made and while performing, it may take longer than this given time period. To
limit a single request's maximum time, use --max-time. Set this option to
zero to not timeout retries.

View File

@ -8,7 +8,7 @@ Example: --sasl-authzid zid imap://example.com/
Use this authorisation identity (authzid), during SASL PLAIN authentication,
in addition to the authentication identity (authcid) as specified by --user.
If the option isn't specified, the server will derive the authzid from the
If the option is not specified, the server will derive the authzid from the
authcid, but if specified, and depending on the server implementation, it may
be used to access another user's inbox, that the user has been granted access
to, or a shared mailbox for example.

View File

@ -6,7 +6,7 @@ Category: important verbose
Example: -s $URL
Added: 4.0
---
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl
Silent or quiet mode. Do not show progress meter or error messages. Makes Curl
mute. It will still output the data you ask for, potentially even to the
terminal/stdout unless you redirect it.

View File

@ -5,9 +5,9 @@ Category: tls
Example: --ssl-allow-beast $URL
---
This option tells curl to not work around a security flaw in the SSL3 and
TLS1.0 protocols known as BEAST. If this option isn't used, the SSL layer may
use workarounds known to cause interoperability problems with some older SSL
implementations.
TLS1.0 protocols known as BEAST. If this option is not used, the SSL layer
may use workarounds known to cause interoperability problems with some older
SSL implementations.
**WARNING**: this option loosens the SSL security, and by using this flag you
ask for exactly that.

View File

@ -6,6 +6,6 @@ Category: tls
Example: --ssl-reqd ftp://example.com
---
Require SSL/TLS for the connection. Terminates the connection if the server
doesn't support SSL/TLS.
does not support SSL/TLS.
This option was formerly known as --ftp-ssl-reqd.

View File

@ -6,7 +6,7 @@ Category: tls
Example: --ssl pop3://example.com/
---
Try to use SSL/TLS for the connection. Reverts to a non-secure connection if
the server doesn't support SSL/TLS. See also --ftp-ssl-control and --ssl-reqd
the server does not support SSL/TLS. See also --ftp-ssl-control and --ssl-reqd
for different levels of encryption required.
This option was formerly known as --ftp-ssl (Added in 7.11.0). That option

View File

@ -5,7 +5,7 @@ Category: proxy
Example: --suppress-connect-headers --include -x proxy $URL
Added: 7.54.0
---
When --proxytunnel is used and a CONNECT request is made don't output proxy
When --proxytunnel is used and a CONNECT request is made do not output proxy
CONNECT response headers. This option is meant to be used with --dump-header or
--include which are used to show protocol headers in the output. It has no
effect on debug options such as --verbose or --trace, or any statistics.

View File

@ -8,4 +8,4 @@ Turn on the TCP_NODELAY option. See the *curl_easy_setopt(3)* man page for
details about this option.
Since 7.50.2, curl sets this option by default and you need to explicitly
switch it off if you don't want it on.
switch it off if you do not want it on.

View File

@ -11,7 +11,7 @@ Added: 5.8
---
Request a file that has been modified later than the given time and date, or
one that has been modified before that time. The <date expression> can be all
sorts of date strings or if it doesn't match any internal ones, it is taken as
sorts of date strings or if it does not match any internal ones, it is taken as
a filename and tries to get the modification date (mtime) from <file>
instead. See the *curl_getdate(3)* man pages for date expression details.
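Assuming this describes the -z/--time-cond option, a minimal sketch of the
filename form, fetching the remote file only if it is newer than a local one:
    curl -z local.html https://example.com/remote.html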

View File

@ -8,4 +8,4 @@ Example: --tlspassword pwd --tlsuser user $URL
Set password for use with the TLS authentication method specified with
--tlsauthtype. Requires that --tlsuser also be set.
This doesn't work with TLS 1.3.
This option does not work with TLS 1.3.

View File

@ -8,4 +8,4 @@ Example: --tlspassword pwd --tlsuser user $URL
Set username for use with the TLS authentication method specified with
--tlsauthtype. Requires that --tlspassword also is set.
This doesn't work with TLS 1.3.
This option does not work with TLS 1.3.

View File

@ -23,7 +23,7 @@ file instead or similar and never used in clear text in a command line.
When using Kerberos V5 with a Windows based server you should include the
Windows domain name in the user name, in order for the server to successfully
obtain a Kerberos Ticket. If you don't then the initial authentication
obtain a Kerberos Ticket. If you do not, then the initial authentication
handshake may fail.
When using NTLM, the user name can be specified simply as the user name,

View File

@ -14,9 +14,9 @@ normal cases, and a line starting with '*' means additional info provided by
curl.
If you only want HTTP headers in the output, --include might be the option
you're looking for.
you are looking for.
If you think this option still doesn't give you enough details, consider using
If you think this option still does not give you enough details, consider using
--trace or --trace-ascii instead.
This option is global and does not need to be specified for each use of

View File

@ -185,7 +185,7 @@ The URL index number of this transfer, 0-indexed. De-globbed URLs share the
same index number as the origin globbed URL. (Added in 7.75.0)
.TP
.B url_effective
The URL that was fetched last. This is most meaningful if you've told curl
The URL that was fetched last. This is most meaningful if you have told curl
to follow location: headers.
.RE
.IP

View File

@ -16,14 +16,14 @@ Most examples should build fine using a command line like this:
`curl-config --cc --cflags --libs` -o example example.c
Some compilers don't like having the arguments in this order but instead
Some compilers do not like having the arguments in this order but instead
want you do reorganize them like:
`curl-config --cc` -o example example.c `curl-config --cflags --libs`
**Please** do not use the `curl.se` site as a test target for your
libcurl applications/experiments. Even if some of the examples use that site
as a URL at some places, it doesn't mean that the URLs work or that we expect
as a URL at some places, it does not mean that the URLs work or that we expect
you to actually torture our website with your tests! Thanks.
## Examples

View File

@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@ -154,7 +154,7 @@ int main(int argc, char **argv)
/* set user name and password for the authentication */
curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
/* Now run off and do what you've been told! */
/* Now run off and do what you have been told! */
res = curl_easy_perform(curl);
/* Check for errors */
if(res != CURLE_OK)

View File

@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@ -156,7 +156,7 @@ int main(int argc, char *argv[])
/* send all data to this function */
curl_easy_setopt(curl_handle, CURLOPT_WRITEFUNCTION, WriteCallback);
/* some servers don't like requests that are made without a user-agent
/* some servers do not like requests that are made without a user-agent
field, so we provide one */
curl_easy_setopt(curl_handle, CURLOPT_USERAGENT,
"libcurl-speedchecker/" CHKSPEED_VERSION);
@ -206,7 +206,7 @@ int main(int argc, char *argv[])
/* cleanup curl stuff */
curl_easy_cleanup(curl_handle);
/* we're done with libcurl, so clean it up */
/* we are done with libcurl, so clean it up */
curl_global_cleanup();
return 0;

View File

@ -104,7 +104,7 @@ main(void)
return 1;
}
/* HTTP-header style cookie. If you use the Set-Cookie format and don't
/* HTTP-header style cookie. If you use the Set-Cookie format and do not
specify a domain then the cookie is sent for any domain and will not be
modified, likely not what you intended. Starting in 7.43.0 any-domain
cookies will not be exported either. For more information refer to the

View File

@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (c) 2000 - 2020 David Odin (aka DindinX) for MandrakeSoft
* Copyright (c) 2000 - 2021 David Odin (aka DindinX) for MandrakeSoft
*/
/* <DESC>
* use the libcurl in a gtk-threaded application
@ -96,7 +96,7 @@ int main(int argc, char **argv)
gtk_widget_show_all(Window);
if(!g_thread_create(&my_thread, argv[1], FALSE, NULL) != 0)
g_warning("can't create the thread");
g_warning("cannot create the thread");
gdk_threads_enter();
gtk_main();

View File

@ -224,7 +224,7 @@ static void timer_cb(GlobalInfo* g, int revents)
err = read(g->tfd, &count, sizeof(uint64_t));
if(err == -1) {
/* Note that we may call the timer callback even if the timerfd isn't
/* Note that we may call the timer callback even if the timerfd is not
* readable. It's possible that there are multiple events stored in the
* epoll buffer (i.e. the timer may have fired multiple times). The
* event count is cleared after the first call so future events in the
@ -503,7 +503,7 @@ int main(int argc, char **argv)
curl_multi_setopt(g.multi, CURLMOPT_TIMERFUNCTION, multi_timer_cb);
curl_multi_setopt(g.multi, CURLMOPT_TIMERDATA, &g);
/* we don't call any curl_multi_socket*() function yet as we have no handles
/* we do not call any curl_multi_socket*() function yet as we have no handles
added! */
fprintf(MSG_OUT, "Entering wait loop\n");

View File

@ -439,7 +439,7 @@ int main(int argc, char **argv)
curl_multi_setopt(g.multi, CURLMOPT_TIMERFUNCTION, multi_timer_cb);
curl_multi_setopt(g.multi, CURLMOPT_TIMERDATA, &g);
/* we don't call any curl_multi_socket*() function yet as we have no handles
/* we do not call any curl_multi_socket*() function yet as we have no handles
added! */
ev_loop(g.loop, 0);

View File

@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@ -38,11 +38,11 @@ int main(void)
fd = fopen("debugit", "rb"); /* open file to upload */
if(!fd)
return 1; /* can't continue */
return 1; /* cannot continue */
/* to get the file size */
if(fstat(fileno(fd), &file_info) != 0)
return 1; /* can't continue */
return 1; /* cannot continue */
curl = curl_easy_init();
if(curl) {

View File

@ -135,7 +135,7 @@ static int fill_buffer(URL_FILE *file, size_t want)
CURLMcode mc; /* curl_multi_fdset() return code */
/* only attempt to fill buffer if transactions still running and buffer
* doesn't exceed required size already
* does not exceed required size already
*/
if((!file->still_running) || (file->buffer_pos > want))
return 0;

View File

@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@ -40,7 +40,7 @@ static size_t my_fwrite(void *buffer, size_t size, size_t nmemb, void *stream)
/* open file for writing */
out->stream = fopen(out->filename, "wb");
if(!out->stream)
return -1; /* failure, can't open file to write */
return -1; /* failure, cannot open file to write */
}
return fwrite(buffer, size, nmemb, out->stream);
}

View File

@@ -5,7 +5,7 @@
  * | (__| |_| |  _ <| |___
  *  \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+ * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -42,7 +42,7 @@ static size_t my_fwrite(void *buffer, size_t size, size_t nmemb,
     /* open file for writing */
     out->stream = fopen(out->filename, "wb");
     if(!out->stream)
-      return -1; /* failure, can't open file to write */
+      return -1; /* failure, cannot open file to write */
   }
   return fwrite(buffer, size, nmemb, out->stream);
 }

View File

@@ -5,7 +5,7 @@
  * | (__| |_| |  _ <| |___
  *  \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+ * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -119,7 +119,7 @@ int main(void)
     curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE,
                      (curl_off_t)fsize);
-    /* Now run off and do what you've been told! */
+    /* Now run off and do what you have been told! */
     res = curl_easy_perform(curl);
     /* Check for errors */
     if(res != CURLE_OK)

View File

@@ -81,7 +81,7 @@ int main(void)
   /* we pass our 'chunk' struct to the callback function */
   curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, (void *)&chunk);
-  /* some servers don't like requests that are made without a user-agent
+  /* some servers do not like requests that are made without a user-agent
      field, so we provide one */
   curl_easy_setopt(curl_handle, CURLOPT_USERAGENT, "libcurl-agent/1.0");
@@ -109,7 +109,7 @@ int main(void)
   free(chunk.memory);
-  /* we're done with libcurl, so clean it up */
+  /* we are done with libcurl, so clean it up */
   curl_global_cleanup();
   return 0;
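
The 'chunk' handed to CURLOPT_WRITEDATA above is a growable memory buffer that
the example's write callback appends to. A condensed sketch of that callback
pattern; the struct and function names here are illustrative.

#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

struct MemoryStruct {
  char *memory;
  size_t size;
};

/* CURLOPT_WRITEFUNCTION callback: append each received chunk to a heap
   buffer and keep it NUL-terminated */
static size_t write_memory_cb(void *contents, size_t size, size_t nmemb,
                              void *userp)
{
  size_t realsize = size * nmemb;
  struct MemoryStruct *mem = (struct MemoryStruct *)userp;

  char *ptr = realloc(mem->memory, mem->size + realsize + 1);
  if(!ptr)
    return 0; /* out of memory: abort the transfer */

  mem->memory = ptr;
  memcpy(&(mem->memory[mem->size]), contents, realsize);
  mem->size += realsize;
  mem->memory[mem->size] = 0;
  return realsize;
}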

View File

@@ -5,7 +5,7 @@
  * | (__| |_| |  _ <| |___
  *  \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+ * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -427,7 +427,7 @@ int main(int argc, char **argv)
   curl_multi_setopt(g->multi, CURLMOPT_TIMERFUNCTION, update_timeout_cb);
   curl_multi_setopt(g->multi, CURLMOPT_TIMERDATA, g);
-  /* we don't call any curl_multi_socket*() function yet as we have no handles
+  /* we do not call any curl_multi_socket*() function yet as we have no handles
      added! */
   g_main_loop_run(gmain);

View File

@@ -447,13 +447,13 @@ int main(int argc, char **argv)
   curl_multi_setopt(g.multi, CURLMOPT_TIMERFUNCTION, multi_timer_cb);
   curl_multi_setopt(g.multi, CURLMOPT_TIMERDATA, &g);
-  /* we don't call any curl_multi_socket*() function yet as we have no handles
+  /* we do not call any curl_multi_socket*() function yet as we have no handles
      added! */
   event_base_dispatch(g.evbase);
-  /* this, of course, won't get called since only way to stop this program is
-     via ctrl-C, but it is here to show how cleanup /would/ be done. */
+  /* this, of course, will not get called since only way to stop this program
+     is via ctrl-C, but it is here to show how cleanup /would/ be done. */
   clean_fifo(&g);
   event_del(&g.timer_event);
   event_base_free(g.evbase);
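
The "we do not call any curl_multi_socket*() function yet" comment refers to
the socket_action flow: nothing happens until an easy handle has been added
(here, once a URL arrives over the fifo), after which the event loop kicks
libcurl by reporting a timeout. A sketch of that kickoff; the GlobalInfo
layout is an assumption mirroring the example's fields.

#include <stdio.h>
#include <curl/curl.h>

struct GlobalInfo {
  CURLM *multi;
  int still_running;
};

/* Sketch: what the timer expiry ultimately does once handles exist */
static void kick_off_on_timeout(struct GlobalInfo *g)
{
  CURLMcode rc = curl_multi_socket_action(g->multi, CURL_SOCKET_TIMEOUT, 0,
                                          &g->still_running);
  if(rc != CURLM_OK)
    fprintf(stderr, "curl_multi_socket_action: %s\n", curl_multi_strerror(rc));
}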

View File

@@ -60,7 +60,7 @@ void dumpNode(TidyDoc doc, TidyNode tnod, int indent)
       printf(">\n");
     }
     else {
-      /* if it doesn't have a name, then it's probably text, cdata, etc... */
+      /* if it does not have a name, then it's probably text, cdata, etc... */
       TidyBuffer buf;
       tidyBufInit(&buf);
       tidyNodeGetText(doc, child, &buf);

View File

@@ -37,7 +37,7 @@
 #include <curl/mprintf.h>
 #ifndef CURLPIPE_MULTIPLEX
-/* This little trick will just make sure that we don't enable pipelining for
+/* This little trick will just make sure that we do not enable pipelining for
    libcurls old enough to not have this symbol. It is _not_ defined to zero in
    a recent libcurl header. */
 #define CURLPIPE_MULTIPLEX 0
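
The fallback #define above only exists so that the multiplexing setup still
compiles against libcurl headers old enough to lack the symbol. A sketch of
the call the guard is protecting; with the fallback value 0 it simply
requests no multiplexing.

#include <curl/curl.h>

/* Sketch: ask for HTTP/2 multiplexing on a multi handle */
static void enable_multiplexing(CURLM *multi_handle)
{
  curl_multi_setopt(multi_handle, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);
}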

View File

@@ -5,7 +5,7 @@
  * | (__| |_| |  _ <| |___
  *  \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+ * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -103,7 +103,7 @@ static int server_push_callback(CURL *parent,
   (void)num_headers; /* unused */
   if(pushindex == MAX_FILES)
-    /* can't fit anymore */
+    /* cannot fit anymore */
     return CURL_PUSH_DENY;
   /* write to this buffer */
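
server_push_callback() above only runs if it has been registered on the multi
handle. A sketch of that registration; the stub body and the 'userdata'
parameter are placeholders, since the real callback in this example stores the
pushed response instead of just accepting it.

#include <curl/curl.h>

static int server_push_callback(CURL *parent, CURL *easy, size_t num_headers,
                                struct curl_pushheaders *headers, void *userp)
{
  (void)parent; (void)easy; (void)num_headers; (void)headers; (void)userp;
  return CURL_PUSH_OK; /* placeholder: accept every push */
}

/* Sketch: offer incoming HTTP/2 pushes to the callback */
static void setup_push(CURLM *multi_handle, void *userdata)
{
  curl_multi_setopt(multi_handle, CURLMOPT_PUSHFUNCTION, server_push_callback);
  curl_multi_setopt(multi_handle, CURLMOPT_PUSHDATA, userdata);
}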

View File

@@ -35,7 +35,7 @@
 #include <curl/curl.h>
 #ifndef CURLPIPE_MULTIPLEX
-#error "too old libcurl, can't do HTTP/2 server push!"
+#error "too old libcurl, cannot do HTTP/2 server push!"
 #endif
 static
@@ -180,7 +180,7 @@ static int server_push_callback(CURL *parent,
   /* here's a new stream, save it in a new file for each new push */
   out = fopen(filename, "wb");
   if(!out) {
-    /* if we can't save it, deny it */
+    /* if we cannot save it, deny it */
     fprintf(stderr, "Failed to create output file for push\n");
     return CURL_PUSH_DENY;
   }
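
Inside a push callback the application can also inspect the pushed request's
headers before deciding, via curl_pushheader_byname(). A hedged sketch of a
callback that only accepts pushes for paths it expects; the "/assets/" check
is an illustrative assumption.

#include <string.h>
#include <curl/curl.h>

/* Sketch: deny every push whose :path does not look interesting */
static int server_push_callback(CURL *parent, CURL *easy, size_t num_headers,
                                struct curl_pushheaders *headers, void *userp)
{
  char *path = curl_pushheader_byname(headers, ":path");
  (void)parent; (void)num_headers; (void)userp;

  if(!path || strncmp(path, "/assets/", 8))
    return CURL_PUSH_DENY; /* not something we asked for */

  /* accept: 'easy' is the new transfer; set write options on it here
     before returning */
  (void)easy;
  return CURL_PUSH_OK;
}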

View File

@@ -39,7 +39,7 @@
 #include <curl/mprintf.h>
 #ifndef CURLPIPE_MULTIPLEX
-/* This little trick will just make sure that we don't enable pipelining for
+/* This little trick will just make sure that we do not enable pipelining for
    libcurls old enough to not have this symbol. It is _not_ defined to zero in
    a recent libcurl header. */
 #define CURLPIPE_MULTIPLEX 0
@@ -124,7 +124,7 @@ int my_trace(CURL *handle, curl_infotype type,
     known_offset = 1;
   }
   secs = epoch_offset + tv.tv_sec;
-  now = localtime(&secs); /* not thread safe but we don't care */
+  now = localtime(&secs); /* not thread safe but we do not care */
   curl_msnprintf(timebuf, sizeof(timebuf), "%02d:%02d:%02d.%06ld",
                  now->tm_hour, now->tm_min, now->tm_sec, (long)tv.tv_usec);
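
my_trace() above is a CURLOPT_DEBUGFUNCTION callback, and the timestamp built
here is what it prefixes to each trace line. A sketch of how such a callback
is attached to an easy handle; the callback is only declared since its
definition is the one in this example.

#include <stddef.h>
#include <curl/curl.h>

/* matches the CURLOPT_DEBUGFUNCTION prototype */
int my_trace(CURL *handle, curl_infotype type, char *data, size_t size,
             void *userp);

/* Sketch: route verbose output through my_trace() */
static void enable_trace(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace);
  /* CURLOPT_DEBUGFUNCTION has no effect until verbose mode is on */
  curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
}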

View File

@@ -5,7 +5,7 @@
  * | (__| |_| |  _ <| |___
  *  \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
+ * Copyright (C) 1998 - 2021, Daniel Stenberg, <daniel@haxx.se>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -35,7 +35,7 @@ int main(void)
   if(curl) {
     curl_easy_setopt(curl, CURLOPT_URL, "https://example.com");
-    /* Forcing HTTP/3 will make the connection fail if the server isn't
+    /* Forcing HTTP/3 will make the connection fail if the server is not
        accessible over QUIC + HTTP/3 on the given host and port.
        Consider using CURLOPT_ALTSVC instead! */
     curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
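
As the comment notes, forcing CURL_HTTP_VERSION_3 makes the transfer fail when
the server cannot be reached over HTTP/3. The CURLOPT_ALTSVC route lets
libcurl upgrade opportunistically instead; a sketch follows, where the cache
filename is an illustrative choice.

#include <curl/curl.h>

/* Sketch: switch to HTTP/3 only when the server has advertised it via
   Alt-Svc, caching what is learned between runs */
static void prefer_h3_via_altsvc(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");
  curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL, (long)CURLALTSVC_H3);
}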

Some files were not shown because too many files have changed in this diff.