The request needs to be read and sent in binary mode in order to use
CRLF instead of LF. Adding --upload-file - causes curl to read stdin
in binary mode.
Make this the default for the curl tool (if built with HTTP/2 powers
enabled) unless a specific HTTP version is requested on the command
line.
This should allow more users to get HTTP/2 powers without having to
change anything.
Tests 842, 843, 844, 845, 887, 888, 889, 890, 946, 947, 948 and 949 fail
if a custom port number is specified via the -b option of runtests.pl.
Suggested by: Kamil Dudka
Bug: http://curl.haxx.se/mail/lib-2015-12/0003.html
As POP3 final and continuation responses both begin with a + character,
and both the finalcode and contcode variables in SASLproto are set as
such, we cannot tell the difference between them when we are expecting
an optional continuation from the server such as the following:
+ something else from the server
+OK final response
Disabled these tests until such a time as we can tell the responses apart.
The tftpd test server now logs all received options and thus all TFTP
test cases need to match them exactly.
Extended test 283 to use and verify --tftp-blksize.
Apparently there are sites out there that do redirects to URLs they
provide in plain UTF-8 or similar. Browsers and wget %-encode such
headers when doing a subsequent request. Now libcurl does too.
Added test 1138 to verify.
Closes #473
- Add new option CURLOPT_DEFAULT_PROTOCOL to allow specifying a default
protocol for schemeless URLs.
- Add new tool option --proto-default to expose
CURLOPT_DEFAULT_PROTOCOL.
In the case of schemeless URLs libcurl will behave in this way:
When the option is used libcurl will use the supplied default.
When the option is not used, libcurl will follow its usual plan of
guessing from the hostname and falling back to 'http'.
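For illustration, a minimal sketch of the libcurl side of this (the URL and
the "https" default here are just placeholders):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* no scheme in the URL, so the default protocol applies */
    curl_easy_setopt(curl, CURLOPT_URL, "example.com/path");
    /* use "https" instead of guessing from the host name */
    curl_easy_setopt(curl, CURLOPT_DEFAULT_PROTOCOL, "https");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}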
New tool option --ssl-no-revoke.
New value CURLSSLOPT_NO_REVOKE for CURLOPT_SSL_OPTIONS.
Currently this option applies only to WinSSL where we have automatic
certificate revocation checking by default. According to the
ssl-compared chart there are other backends that have automatic checking
(NSS, wolfSSL and DarwinSSL) so we could possibly accommodate them at
some later point.
Bug: https://github.com/bagder/curl/issues/264
Reported-by: zenden2k <zenden2k@gmail.com>
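For reference, a hedged sketch of setting the new value through the existing
CURLOPT_SSL_OPTIONS bitmask ('curl' is assumed to be an already created easy
handle):

/* skip automatic certificate revocation checks (WinSSL only for now) */
curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);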
When CURL_SOCKET_BAD is returned in the callback, it should be treated
as an error (CURLE_COULDNT_CONNECT) if no other socket is subsequently
created when trying to connect to a server.
Bug: http://curl.haxx.se/mail/lib-2015-06/0047.html
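The callback in question is presumably the open-socket callback; a minimal
sketch of one that signals failure this way (names are illustrative):

#include <curl/curl.h>

/* sketch: an open-socket callback that refuses to create a socket */
static curl_socket_t my_opensocket(void *clientp, curlsocktype purpose,
                                   struct curl_sockaddr *address)
{
  (void)clientp;
  (void)purpose;
  (void)address;
  /* with this fix, returning CURL_SOCKET_BAD and never creating another
     socket makes the transfer fail with CURLE_COULDNT_CONNECT */
  return CURL_SOCKET_BAD;
}

/* curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, my_opensocket); */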
Make the HTTP headers separated by default for improved security and
a reduced risk of information leakage.
Bug: http://curl.haxx.se/docs/adv_20150429.html
Reported-by: Yehezkel Horowitz, Oren Souroujon
This commit fixes a regression introduced in curl-7_41_0-186-g261a0fe.
It also introduces a regression test 1424 based on tests 78 and 1423.
Reported-by: Viktor Szakats
Bug: https://github.com/bagder/curl/issues/237
Previously in Curl_http2_switched, we called nghttp2_session_mem_recv to
parse incoming data which had already been received while curl was
handling the upgrade. But we didn't call nghttp2_session_send, which
meant curl never sent any response to the received frames. Most likely,
we received SETTINGS from the server at this point, so we missed the
opportunity to send the SETTINGS + ACK. This commit adds the missing
nghttp2_session_send call in Curl_http2_switched to fix this issue.
Bug: https://github.com/bagder/curl/issues/192
Reported-by: Stefan Eissing
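Internal names aside, the nghttp2 pattern the fix follows is: feed the
already-buffered bytes to the session, then flush whatever frames (such as
the SETTINGS ACK) it queued in response. A hedged sketch using the public
nghttp2 API:

#include <nghttp2/nghttp2.h>

/* sketch: process bytes received during the Upgrade, then send replies */
static int process_buffered(nghttp2_session *session,
                            const uint8_t *buf, size_t len)
{
  ssize_t nread = nghttp2_session_mem_recv(session, buf, len);
  if(nread < 0 || (size_t)nread != len)
    return -1;
  /* without this call, queued frames such as SETTINGS + ACK never go out */
  if(nghttp2_session_send(session) != 0)
    return -1;
  return 0;
}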
"name =value" is fine and the space should just be skipped.
Updated test 31 to also test for this.
Bug: https://github.com/bagder/curl/issues/195
Reported-by: cromestant
Help-by: Frank Gevaerts
It seems that some systems (e.g. fairly consistently in some recent
Solaris autobuilds) would manage to get to the connect phase before the
progress callback was called, resulting in a CURLE_COULDNT_CONNECT
error. Reworked the test to point at a test server that never returns a
full result so the progress callback always gets a chance to be called
before the transfer can complete in some other way.
Since we just started to make use of free(NULL) in order to simplify code,
this change takes it a step further and:
- converts lots of Curl_safefree() calls to good old free()
- makes Curl_safefree() not check the pointer before free()
The (new) rule of thumb is: if you really want a function call that
frees a pointer and then assigns it to NULL, then use Curl_safefree().
But we will prefer just using free() from now on.
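A rough sketch of the distinction; the real macro lives in lib/, so treat
the definition below as illustrative only:

/* illustrative only: a free-and-clear helper, no NULL check needed since
   free(NULL) is a documented no-op */
#define Curl_safefree(p) \
  do { free(p); (p) = NULL; } while(0)

/* rule of thumb:
   free(ptr);             when the pointer is about to go out of scope
   Curl_safefree(obj->p); when the pointer stays reachable and must not
                          dangle */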
... by using the regular Curl_http_done() method which checks for
that. This makes test 1801 fail consistently with error 56 (which seems
fine) so that test is also updated here.
Reported-by: Ben Darnell
Bug: https://github.com/bagder/curl/issues/166
...after the method line:
"Since the Host field-value is critical information for handling a
request, a user agent SHOULD generate Host as the first header field
following the request-line." / RFC 7230 section 5.4
Additionally, this will also make libcurl ignore multiple specified
custom Host: headers and only use the first one. Test 1121 has been
updated accordingly
Bug: http://curl.haxx.se/bug/view.cgi?id=1491
Reported-by: Rainer Canavan
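For illustration, a custom Host: header passed via CURLOPT_HTTPHEADER; with
this change it is emitted right after the request-line and only the first
one counts ('curl' is an already created easy handle, values are
placeholders):

struct curl_slist *hdrs = NULL;

/* sent first, immediately after the request-line */
hdrs = curl_slist_append(hdrs, "Host: example.com");
/* a second custom Host: header would now simply be ignored */
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

/* ... perform the transfer, then: */
curl_slist_free_all(hdrs);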
When checking for a connection to re-use, a proxy-using request must
check for and use a proxy connection and not one based on the host
name!
Added test 1421 to verify
Bug: http://curl.haxx.se/bug/view.cgi?id=1492
* Missing initialisation of upload status caused a seg fault
* Missing data termination caused corrupt data to be uploaded
* Data verification should be performed in <upload> element
* Added missing recipient list cleanup
test1435: a simple test that checks whether an HTTP request can be
performed over the UNIX socket. The hostname/port are interpreted
by sws and should be ignored by cURL.
test1436: test for the ability to do two requests to the same host,
interleaved with one to a different hostname.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
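A hedged sketch of the client side of what these tests exercise, using
CURLOPT_UNIX_SOCKET_PATH (socket path and URL are placeholders):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* connect over the UNIX socket instead of TCP */
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/tmp/curl-test.sock");
    /* host name and port only end up in the request and Host: header */
    curl_easy_setopt(curl, CURLOPT_URL, "http://server-a.example/path");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}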
This is the only user of the backtick operator in the command. As the
commands will soon not be executed by a shell anymore (but by perl),
replace the command with its output.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Added !SSPI to the features list of the HTTP digest tests, as SSPI
based builds now use the Windows SSPI messaging API rather than the
internal functions, and we can't control the random numbers that get
used as part of the digest.
Basically since servers often don't respond well to this and instead
send the full contents, libcurl would then error out with the assumption
that the server doesn't support resume. As the data is then already
transferred, this is now considered fine.
Test case 1434 added to verify this. Test case 1042 slightly modified.
Reported-by: hugo
Bug: http://curl.haxx.se/bug/view.cgi?id=1443
HTTP 1.1 is clearly specified to only allow three digit response codes,
and libcurl used sscanf("%3d") for that purpose. This made libcurl
support smaller numbers but not larger ones. It does now, but we will not
make any specific promises nor document this further since it goes
beyond what HTTP specifies.
Bug: http://curl.haxx.se/bug/view.cgi?id=1441
Reported-by: Balaji
CURLOPT_COPYPOSTFIELDS with a given CURLOPT_POSTFIELDSIZE does not
require a trailing zero byte in the data, and by making sure this test
doesn't use one we know it works (combined with valgrind).
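Roughly what the test exercises on the API side; note that
CURLOPT_POSTFIELDSIZE must be set before CURLOPT_COPYPOSTFIELDS so libcurl
knows how much to copy ('curl' is an existing easy handle, the data is made
up):

char data[3] = { 'a', 'b', 'c' };   /* deliberately not zero-terminated */

/* the size has to be known before the copy is made */
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)sizeof(data));
curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, data);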
This change allows runtests.pl to be run from the CMake builddir:
export srcdir=/tmp/curl/tests;
perl -I$srcdir $srcdir/runtests.pl -l
In order to make this possible, all test cases have been moved from
Makefile.am to Makefile.inc.
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Option --pinnedpubkey takes a path to a public key in DER format and
only connects if it matches (currently only implemented with OpenSSL).
Provides CURLOPT_PINNEDPUBLICKEY for curl_easy_setopt().
Extract a public RSA key from a website like so:
openssl s_client -connect google.com:443 2>&1 < /dev/null | \
sed -n '/-----BEGIN/,/-----END/p' | openssl x509 -noout -pubkey \
| openssl rsa -pubin -outform DER > google.com.der
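The matching libcurl call, assuming the DER file extracted above and an
existing easy handle:

/* abort the TLS handshake unless the server's public key matches the file */
curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "google.com.der");
curl_easy_setopt(curl, CURLOPT_URL, "https://google.com/");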
By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled to both send cookies to wrong sites and to allow arbitrary sites
to set cookies for others.
CVE-2014-3613
Bug: http://curl.haxx.se/docs/adv_20140910A.html
Historically the default "unknown" value for progress.size_dl and
progress.size_ul has been zero, since these values are initialized
implicitly by the calloc that allocates the curl handle that these
variables are a part of. Users of curl that install progress
callbacks may expect these values to always be >= 0.
Currently it is possible for progress.size_dl and progress.size_ul
to be set to a value of -1, if Curl_pgrsSetDownloadSize() or
Curl_pgrsSetUploadSize() are passed a "size" of -1 (which a few
places currently do, and a following patch will add more). So
let's update Curl_pgrsSetDownloadSize() and Curl_pgrsSetUploadSize()
so they make sure that these variables always contain a value that
is >= 0.
Updates test579 and test599.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
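From the application side the guarantee is simply that the totals handed to
a progress callback are never negative (0 still means unknown). A minimal
sketch of a callback relying on that; names are illustrative:

#include <stdio.h>
#include <curl/curl.h>

static int xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                       curl_off_t ultotal, curl_off_t ulnow)
{
  (void)clientp;
  (void)ultotal;
  (void)ulnow;
  /* dltotal is now guaranteed to be >= 0; 0 means "size unknown" */
  if(dltotal > 0)
    fprintf(stderr, "%3d%%\r", (int)(100 * dlnow / dltotal));
  return 0; /* returning non-zero would abort the transfer */
}

/* curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
   curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); */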
... to handle "*/[total]". Also, removed the strange hack that made
CURLOPT_FAILONERROR on a 416 response after a *RESUME_FROM return
CURLE_OK.
Reported-by: Dimitrios Siganos
Bug: http://curl.haxx.se/mail/lib-2014-06/0221.html
Curl_rand() will return a dummy and repeatable random value for this
case. Makes it possible to write test cases that verify output.
Also, fake timestamp with CURL_FORCETIME set.
Only when built with debug enabled, of course.
Curl_ssl_random() was not used anymore so it has been
removed. Curl_rand() is enough.
create_digest_md5_message: generate base64 instead of hex string
curl_sasl: also fix memory leaks in some OOM situations
Added required "debug" feature, missed in commit 1c9aaa0bac, as NTLMv2
calls Curl_rand() which can only be fixed to a specific entropy in
debug builds.
Verifies that the change in 68f0166a92 works as intended and that
different HTTP auth credentials to the same host still re-uses the
connection properly.
If the precision is indeed shorter than the string, don't strlen() to
find the end because that's not how the precision operator works.
I also added a unit test for curl_msnprintf to make sure this works and
that the fix doesn't break a few other basic use cases. I found a POSIX
compliance problem that I marked TODO in the unit test, and I figure we
need to add more tests in the future.
Reported-by: Török Edwin
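A small illustration of the precision rule in question, assuming the public
curl_msnprintf() declared in <curl/mprintf.h>:

#include <stdio.h>
#include <curl/mprintf.h>

int main(void)
{
  char out[64];
  const char *input = "0123456789";
  /* with a precision of 5, only the first 5 bytes may be consumed; the
     implementation must not strlen() past them to find the end */
  curl_msnprintf(out, sizeof(out), "%.5s", input);
  printf("%s\n", out);   /* expected: 01234 */
  return 0;
}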
Updated the docs to clarify and the code accordingly, with test 1528 to
verify:
When CURLHEADER_SEPARATE is set and libcurl is asked to send a request
to a proxy but it isn't CONNECT, then _both_ header lists
(CURLOPT_HTTPHEADER and CURLOPT_PROXYHEADER) will be used since the
single request is then made for both the proxy and the server.
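A hedged sketch of the separated-lists setup described above (proxy address
and header values are placeholders, 'curl' is an existing easy handle):

struct curl_slist *hdrs = NULL, *proxyhdrs = NULL;

hdrs = curl_slist_append(hdrs, "X-App: for the server");
proxyhdrs = curl_slist_append(proxyhdrs, "X-Relay: for the proxy");

curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:3128");
curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_SEPARATE);
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxyhdrs);
/* for a plain (non-CONNECT) request to the proxy, both lists are sent */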
This makes it possible to fetch from an IPv6 literal without specifying
the -g option. Globbing remains available elsewhere in the URL.
For example:
curl http://[::1]/file[1-3].txt
This creates no ambiguity, because there is no overlap between the
syntax of valid globs and valid IPv6 literals. Globs contain hyphens
and at most 1 colon, while IPv6 literals have no hyphens, and at least 2
colons.
The peek_ipv6() parser simply whitelists a set of characters and counts
colons, because the real validation happens later on. The character set
includes A-Z, in case someone decides to implement support for scopes
like [fe80::1%25eth0] in the future.
Signed-off-by: Paul Marks <pmarks@google.com>
As the email protocols implement SASL authentication rather than IMAP,
POP3 and SMTP specific authentication, updated the authentication
keywords to reflect this.
The improved connection reuse logic would otherwise create a new
connection for each one, which isn't supported by the test
server, nor expected by the test.
When allowing NTLM, the re-use connection logic was too focused on
finding an existing NTLM connection to use and didn't properly allow
re-use of other ones. This made the logic not re-use perfectly re-usable
connections.
Added test cases 1418 and 1419 to verify.
Regression brought in by 8ae35102c (curl 7.35.0)
Reported-by: Jeff King
Bug: http://thread.gmane.org/gmane.comp.version-control.git/242213
Do not try to convert line-endings to CRLF on Windows by setting stdout
to binary mode, just like the curl tool does if --ascii is not specified.
This should prevent corrupted stdout line-ending output like CRCRLF.
In order to make the previously naive text-aware tests work with
binary mode on Windows, text-mode is disabled for them if it is not
actually part of the test case and line-endings are corrected.
According to RFC 2616 and RFC 2326, individual protocol elements, such as
headers (but not the actual content), are terminated using CRLF.
Therefore the test data files for these protocols need to contain
mixed line-endings if the actual protocol elements use CRLF while
the file uses LF.
Not comma, which is an inconsistency and a mistake probably inherited
from the examples section of RFC1867.
This bug has been present since the day curl started to support
multipart formposts, back in the 90s.
Reported-by: Rob Davies
Bug: http://curl.haxx.se/bug/view.cgi?id=1333
Fix for bug #1303 (030a2b8cb) was not complete.
libcurl still pruned DNS entries added manually
after detecting a dead connection. This test
checks such behavior.
Test-case 1515 reproduces bug #1303, where libcurl
would incorrectly prune DNS entries added via
CURLOPT_RESOLVE after the DNS_CACHE_TIMEOUT had
expired.
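For context, a minimal sketch of the kind of CURLOPT_RESOLVE entry that must
survive the cache timeout (host, port and address are placeholders):

struct curl_slist *resolve = NULL;

/* pin example.com:443 to this address; such entries should not be pruned
   by the DNS cache timeout logic */
resolve = curl_slist_append(resolve, "example.com:443:127.0.0.1");
curl_easy_setopt(curl, CURLOPT_RESOLVE, resolve);
/* even a very short timeout must not evict the entry above */
curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 1L);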
The test contains a cookie jar file where one of the cookies has an
expiry date of 1391252187 -- Sat, 1 Feb 2014 10:56:27 GMT which has
now expired. Updated to Wed, 14 Oct 2037 16:36:33 GMT as per test
179.
Reported-by: Adam Sampson
Bug: http://curl.haxx.se/bug/view.cgi?id=1330
Also, make the ftp server return a canned response that doesn't
cause XML verification problems. Although the test file format
isn't technically XML, it's still handy to be able to use XML
tools to verify and manipulate them.
Previously LIST always returned a fixed hardcoded list that the ftp
server code knew about, mostly since the server didn't get any test case
number in the LIST scenario. Starting now, doing a CWD to a directory
named test-[number] will make the test server remember that number and
consider it a test case so that a subsequent LIST command will send the
<data> section of that test case back.
It allows LIST tests to be made more similar to how all other tests
work.
Test 100 was updated to provide its own directory listing.
Verify the change brought in commit 8e11731653061. It makes sure that
returning a failure from the progress callback even very early results
in the correct return code.
Added support for downgrading the SASL authentication mechanism when the
decoding of CRAM-MD5, DIGEST-MD5 and NTLM messages fails. This enhances
the previously added support for graceful cancellation by allowing the
client to retry a lesser SASL mechanism such as LOGIN or PLAIN, or even
APOP / clear text (in the case of POP3 and IMAP) when supported by the
server.
To avoid the regression when users pass in passwords containing semi-
colons, we now drop the ability to set the login options via that same
option. Support for login options in CURLOPT_USERPWD was added in
7.31.0.
Test case 83 was modified to verify that colons and semi-colons can be
used as part of the password when using -u (CURLOPT_USERPWD).
Bug: http://curl.haxx.se/bug/view.cgi?id=1311
Reported-by: Petr Bahula
Assisted-by: Steve Holme
Signed-off-by: Daniel Stenberg <daniel@haxx.se>
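A hedged sketch of the resulting split: passwords containing semicolons go
through CURLOPT_USERPWD untouched, while login options use the dedicated
CURLOPT_LOGIN_OPTIONS (values are made up):

/* everything after the first colon is the password, semicolons and
   further colons included */
curl_easy_setopt(curl, CURLOPT_USERPWD, "user:pass;word:with:colons");

/* login options are no longer parsed out of CURLOPT_USERPWD; they get
   their own option instead */
curl_easy_setopt(curl, CURLOPT_LOGIN_OPTIONS, "AUTH=PLAIN");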
A failure during authentication, which is performed as part of the
CONNECT phase (for IMAP, POP3 and SMTP) is considered by the multi-
interface as being closed prematurely (aka a dead connection). As such
these protocols cannot issue the relevant QUIT or LOGOUT command.
Temporarily fixed the test cases until we can fix this properly.
The error code should not be sent as data as it isn't passed onto the
client as body data, so cannot be compared in the test suite against
expected data.
This is a regression since the switch to always-multi internally
c43127414d.
Test 1316 was modified since we now clearly call the Curl_client_write()
function when doing the LIST transfer part and then the
handler->protocol says FTP and ftpc.transfertype is 'A' which implies
text converting even though the response is initially an HTTP
CONNECT response in this case.
As the URI, which is contained within the DIGEST-MD5 response, is
constructed from the service and realm, the encoded message differs
from that generated under POP3.