... it also clobbered the 'result' return value so that it wouldn't
return the error back to the parent function properly, which broke test
809 when run with 'multi-always'.
When a path is prefixed with /~/ it is supposed to be interpreted relative to
the user's home directory, but it didn't work. Now we cut off the entire
three-byte sequence "/~/", which seems to be how OpenSSH does it.
Bug: http://curl.haxx.se/bug/view.cgi?id=1173
Reported by: Balaji Parasuram
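For illustration, a minimal sketch of the idea with a hypothetical helper
name, not the actual SFTP code:

    #include <string.h>

    /* Strip a leading "/~/" so the rest of the path is interpreted
       relative to the user's home directory, similar to how OpenSSH
       treats such paths. */
    static const char *strip_home_prefix(const char *path)
    {
      if(strncmp(path, "/~/", 3) == 0)
        return path + 3; /* cut off the entire three-byte sequence */
      return path;
    }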
Issue: When building a 32-bit target with large file support, the HP-UX
<sys/socket.h> header file may simultaneously provide two different
sets of declarations for the sendfile and sendpath functions, one with
static and another with external linkage. Given that we do not use
the mentioned functions we really don't care which linkage is the
appropriate one, but on the other hand, the double declaration emits
warnings when using the HP-UX compiler and errors when using modern
gcc versions, resulting in fatal compilation errors.
The mentioned issue is now fixed, as long as we use neither the
sendfile nor the sendpath functions.
A bundle is a list of all persistent connections to the same host.
The connection cache consists of a hash of bundles, with the
hostname as the key.
The benefits may not be obvious, but there are two:
1) Faster search for connections to reuse, since the hash
lookup only finds connections to the host in question.
2) It lays the groundwork for an upcoming patch,
which will introduce multiple HTTP pipelines.
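A minimal sketch of the layout described above, with hypothetical type and
field names rather than the actual libcurl structs:

    #include <stddef.h>

    /* A bundle collects all persistent connections to one host. */
    struct bundle {
      char *hostname;             /* the key used in the hash */
      struct connection **conns;  /* persistent connections to this host */
      size_t num_conns;
    };

    /* The connection cache is a hash of bundles keyed on hostname, so a
       search for a reusable connection only has to walk the connections
       to the host in question. */
    struct conncache {
      struct bundle **buckets;    /* indexed by hash(hostname) */
      size_t num_buckets;
    };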
This patch also removes the awkward list of "closure handles",
which were needed to send QUIT commands to the FTP server
when closing a connection.
Now we allocate a separate closure handle and use that
one to close all connections.
This has been tested in a live system for a few weeks, and of
course passes the test suite.
BLANK_AT_MAKETIME may be used in our Makefile.am files to blank the
LIBS variable used in the generated makefile at makefile processing
time. Doing this effectively prevents LIBS from being used for
all link targets in the given makefile.
This handling already works with the easy-interface code. When a request
is sent on a re-used connection that gets closed by the server at the
same time as the request is sent, it can happen that we manage to send
the request but then discover the broken connection as a RECV_ERROR in
the PERFORM state, in which case the request needs to be retried on a
fresh connection. Test 64 broke with 'multi-always-internally'.
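A rough sketch of the retry decision, using hypothetical names rather than
the actual multi-interface state machine:

    #include <stdbool.h>
    #include <stddef.h>

    /* Retry on a fresh connection only if the request went out on a
       re-used connection and the failure showed up as a receive error
       before any part of the response arrived. */
    static bool should_retry_request(bool conn_was_reused,
                                     bool got_recv_error,
                                     size_t bytes_received)
    {
      return conn_was_reused && got_recv_error && (bytes_received == 0);
    }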
Although it is not explicitly stated in the documentation, NSS uses
*pRetCert and *pRetKey even if the client authentication hook returns
a failure. Namely, if we destroy *pRetCert without clearing *pRetCert
afterwards, NSS destroys the certificate once again, which causes a
double free.
Reported by: Bob Relyea
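A sketch of the failure path in a client-authentication hook; the callback
shape follows the NSS SSL_GetClientAuthDataHook convention, but treat the
details as an assumption rather than the exact libcurl code:

    #include <prio.h>
    #include <cert.h>
    #include <keyhi.h>
    #include <ssl.h>

    static SECStatus client_auth_hook(void *arg, PRFileDesc *fd,
                                      CERTDistNames *caNames,
                                      CERTCertificate **pRetCert,
                                      SECKEYPrivateKey **pRetKey)
    {
      CERTCertificate *cert = NULL;  /* ...selected certificate... */
      SECKEYPrivateKey *key = NULL;  /* ...matching private key... */
      (void)arg; (void)fd; (void)caNames;

      if(!cert || !key) {
        /* NSS may still use *pRetCert and *pRetKey even though we return
           failure, so destroy our own references and clear the pointers,
           otherwise NSS destroys the certificate once again. */
        if(cert)
          CERT_DestroyCertificate(cert);
        if(key)
          SECKEY_DestroyPrivateKey(key);
        *pRetCert = NULL;
        *pRetKey = NULL;
        return SECFailure;
      }

      *pRetCert = cert;
      *pRetKey = key;
      return SECSuccess;
    }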
.. that are sent when auth-negotiating before a chunked
upload or when setting the 'Transfer-Encoding: chunked'
header and intentionally sending no content.
Adjust test565 and test1333 accordingly.
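For reference, a small libcurl sketch of the second case: setting the
'Transfer-Encoding: chunked' header while the read callback intentionally
provides no content (the URL is a placeholder, error checking omitted):

    #include <curl/curl.h>

    /* Read callback that sends no content at all. */
    static size_t read_nothing(char *ptr, size_t size, size_t nmemb,
                               void *userp)
    {
      (void)ptr; (void)size; (void)nmemb; (void)userp;
      return 0; /* end of request body right away */
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      struct curl_slist *hdrs =
        curl_slist_append(NULL, "Transfer-Encoding: chunked");

      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_nothing);
      curl_easy_perform(curl);

      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
      return 0;
    }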
DNS cache entries populated with CURLOPT_RESOLVE were not properly freed
again when done using the multi interface.
Test case 1502 added to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=3575448
Reported by: Alex Gruz
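For context, a minimal sketch of how such entries are populated when using
the multi interface (host and address are placeholders, error checking
omitted):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      CURLM *multi = curl_multi_init();
      struct curl_slist *resolve = NULL;
      int running = 0;

      /* Pre-populate the DNS cache with "HOST:PORT:ADDRESS" */
      resolve = curl_slist_append(resolve, "example.com:80:127.0.0.1");
      curl_easy_setopt(easy, CURLOPT_RESOLVE, resolve);
      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");

      curl_multi_add_handle(multi, easy);
      do {
        curl_multi_perform(multi, &running);
      } while(running);

      /* The cache entries created from the list above should be released
         by libcurl during cleanup - the path that test 1502 verifies. */
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      curl_slist_free_all(resolve);
      return 0;
    }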
If we use memory functions (malloc, free, strdup etc) in C sources in
libcurl and we fail to include curl_memory.h or memdebug.h we either
fail to properly support user-provided memory callbacks or the memory
leak system of the test suite fails.
After Ajit's report of a failure in the first category in http_proxy.c,
I spotted a few in the second category as well. These problems are now
tested for by test 1132, which runs a perl program that scans the source
code and checks that the correct include files are used wherever a memory
related function is used.
Reported by: Ajit Dhumale
Bug: http://curl.haxx.se/mail/lib-2012-11/0125.html
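As I understand the convention the test enforces, a source file that calls
memory functions ends its include list roughly like this:

    /* ...the source file's other includes come first... */

    #include "curl_memory.h" /* map malloc, free, strdup etc. onto the
                                user-provided memory callbacks */
    #include "memdebug.h"    /* keep last so the test suite's memory leak
                                tracking sees every allocation */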
When using only 1 second precision, curl doesn't create new cnonce
values quickly enough for all uses.
For example, issuing the following command multiple times to a recent
Tomcat causes authentication failures:
curl --digest -utest:test http://tomcat.test.com:8080/manager/list
This is because curl uses the same cnonce for several seconds, but
doesn't increment the nonce counter. Tomcat correctly interprets
this as a replay attack and rejects the request.
When microsecond-precision is available, this commit causes curl to
change cnonce values much more frequently.
With microsecond resolution, the cnonce length used in the headers was
also increased to 32 to further reduce the risk of duplication.
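A rough sketch of the idea, not the actual implementation: derive the
cnonce from a microsecond-resolution timestamp (gettimeofday here is an
assumption) so that back-to-back requests get distinct values; a full
32-character cnonce would need additional data mixed in.

    #include <stdio.h>
    #include <sys/time.h>

    /* Build a cnonce seed that changes every microsecond instead of
       every second, so rapidly repeated requests do not reuse it. */
    static void make_cnonce_seed(char *buf, size_t buflen)
    {
      struct timeval tv;
      gettimeofday(&tv, NULL);
      snprintf(buf, buflen, "%08lx%08lx",
               (unsigned long)tv.tv_sec, (unsigned long)tv.tv_usec);
    }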
axTLS:
This will make the axTLS backend perform the RFC2818 checks, honoring
the VERIFYHOST setting in a similar way to the OpenSSL backend.
Generic for OpenSSL and axTLS:
Move the hostcheck and cert_hostcheck functions from the lib/ssluse.c
file to make them generically available to the OpenSSL, axTLS and
other SSL backends. They are now in the new lib/hostcheck.c file.
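A sketch of how a backend might use the shared check, assuming the function
keeps the pattern/hostname shape it had in ssluse.c and that wildcard names
such as "*.example.com" are handled inside it:

    #include <stddef.h>
    #include "hostcheck.h" /* Curl_cert_hostcheck() */

    /* Walk the names found in the server certificate and accept the
       connection if any of them matches the host we connected to. */
    static int any_name_matches(const char **cert_names, size_t count,
                                const char *conn_host)
    {
      size_t i;
      for(i = 0; i < count; i++)
        if(Curl_cert_hostcheck(cert_names[i], conn_host))
          return 1;
      return 0;
    }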
CyaSSL:
CyaSSL now also has the RFC2818 checks enabled by default. There is a
limitation in that verifyhost cannot be restricted to the Subject CN
field comparison alone. This SSL backend will thus behave like the
NSS and GnuTLS backends (meaning: RFC2818 ok, or bust). In other words:
setting verifyhost to 0 or 1 will disable the Subject Alt Names checks
too.
Schannel:
Updated the schannel information messages: Split the IP address usage
message from the verifyhost setting and changed the message about
disabling SNI (Server Name Indication, used in HTTP virtual hosting)
into a message stating that the Subject Alternative Names checks are
being disabled when verifyhost is set to 0 or 1. As a side effect of
switching off the RFC2818 related servername checks with
SCH_CRED_NO_SERVERNAME_CHECK
(http://msdn.microsoft.com/en-us/library/aa923430.aspx) the SNI feature
is being disabled. This effect is not documented in MSDN, but Wireshark
output clearly shows it (details on the libcurl mailing list).
PolarSSL:
Fix the prototype change in PolarSSL's ssl_set_session() and the move
of peer_cert from the ssl_context to the ssl_session. This change was
found in the PolarSSL SVN between r1316 and r1317, where the
POLARSSL_VERSION_NUMBER was at 0x01010100. But to accommodate the Ubuntu
PolarSSL version 1.1.4, the check discriminates between PolarSSL versions
lower than 1.2.0 and versions 1.2.0 and higher. Note: the PolarSSL SVN
trunk jumped from version 1.1.1 to 1.2.0.
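A sketch of the kind of compile-time discrimination described above (the
exact macro value is an assumption based on PolarSSL's MMNNPP00 version
encoding):

    #include <polarssl/version.h>

    #if POLARSSL_VERSION_NUMBER < 0x01020000
      /* before 1.2.0: peer_cert lives in the ssl_context and
         ssl_set_session() still has the old prototype */
    #else
      /* 1.2.0 and later: peer_cert moved into the ssl_session and
         ssl_set_session() uses the new prototype */
    #endif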
Generic:
All the SSL backends are fixed and checked to work with the
ssl.verifyhost as a boolean, which is an internal API change.
The text "additional stuff not fine" text was added for debug purposes a
while ago, but it isn't really helping anyone and for some reason some
Linux distributions provide their libcurls built with debug info still
present and thus (far too many) users get to read this info.
The logic previously checked for a started NTLM negotiation only with the
host and not also with the proxy, leading to problems doing POSTs larger
than 2000 bytes over an NTLM proxy. Now it includes the proxy in the
check.
Bug: http://curl.haxx.se/bug/view.cgi?id=3582321
Reported by: John Suprock
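A rough sketch of the corrected condition, with hypothetical flag names
rather than the exact internal state fields:

    #include <stdbool.h>

    /* An NTLM negotiation counts as started whether it was initiated
       against the origin host or against the proxy. */
    static bool ntlm_negotiation_started(bool host_ntlm_started,
                                         bool proxy_ntlm_started)
    {
      return host_ntlm_started || proxy_ntlm_started;
    }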
The existing logic only cut off the fragment from the separate 'path'
buffer, which is used when sending HTTP to hosts. The buffer that held
the full URL used for proxies was not dealt with. It is now.
Test case 5 was updated to use a fragment on a URL over a proxy.
Bug: http://curl.haxx.se/bug/view.cgi?id=3579813
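An illustrative sketch (hypothetical helper, not the actual URL parser):
the fragment has to be cut off from the full URL kept for proxy requests,
just as it already was from the separate path buffer.

    #include <string.h>

    /* Cut off a "#fragment" suffix in place, e.g.
       "http://example.com/page.html#top" -> "http://example.com/page.html".
       The fragment is only meaningful locally and must never be sent to
       the server or to a proxy. */
    static void strip_fragment(char *url)
    {
      char *hash = strchr(url, '#');
      if(hash)
        *hash = '\0';
    }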