The recent overhaul of the SSL recv function made it treat a
zero returned from gnutls_record_recv() as an error, and this
caused our HTTPS test cases to fail. A zero return simply means
the peer closed the connection, so we leave it to upper-layer
code to decide whether the EOF is a problem or not.
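For illustration, a minimal sketch of the intended handling (the
session and buffer setup are assumed): gnutls_record_recv() returns
the number of bytes read, 0 on a clean EOF from the peer, and a
negative value only on a real error or a would-block condition.

  #include <gnutls/gnutls.h>

  /* 'session', 'buf' and 'len' are assumed to be set up by the caller */
  ssize_t rc = gnutls_record_recv(session, buf, len);
  if(rc > 0) {
    /* rc bytes of application data were received */
  }
  else if(rc == 0) {
    /* clean EOF: the peer closed the connection; not an error here,
       upper-layer code decides whether this EOF is a problem */
  }
  else if(rc == GNUTLS_E_AGAIN || rc == GNUTLS_E_INTERRUPTED) {
    /* transient: retry the call later */
  }
  else {
    /* a real error */
  }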
This code would previously use dns_entry->addr->ai_canonname
instead of the given host name, which caused problems since not
all our resolver options do the reverse lookup, and I would also
guess that it caused problems with KRB5/GSS and name-based
virtual hosts. Now the host name from the URL is used.
As reported in bug report #2987196, the IPv6 code already set
this bit correctly, so we copied that logic into the
Curl_ipv4_resolve_r() function as well. The KRB code is the only
code we know of that might need the canonical name, so we only
resolve it for such requests!
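A sketch of the idea with the standard getaddrinfo() hints (the
condition name is illustrative, not curl's actual code): the
AI_CANONNAME flag is only set when the canonical name will actually
be used, so ordinary lookups skip the extra resolver work.

  #include <sys/socket.h>
  #include <netdb.h>
  #include <string.h>

  struct addrinfo hints, *res;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_INET;
  hints.ai_socktype = SOCK_STREAM;
  if(request_needs_canonical_name)  /* e.g. a KRB request */
    hints.ai_flags |= AI_CANONNAME;
  if(getaddrinfo("example.com", "80", &hints, &res) == 0) {
    /* res->ai_canonname is only filled in when AI_CANONNAME was set */
    freeaddrinfo(res);
  }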
Prefixing the FTP quote commands with an asterisk really only
worked for the postquote actions. This is now fixed, and test
case 227 has been extended to verify it.
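For reference, a hedged sketch of how the asterisk prefix is meant
to work with libcurl's quote options (file names are placeholders):
a command prefixed with '*' may fail without aborting the operation,
and this now holds for the prequote and quote lists too, not only
for postquote.

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_slist *cmds = NULL;
    /* the leading '*' tells libcurl to continue even if DELE fails */
    cmds = curl_slist_append(cmds, "*DELE no-such-file.txt");
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_QUOTE, cmds); /* sent before the transfer */
    curl_easy_perform(curl);
    curl_slist_free_all(cmds);
    curl_easy_cleanup(curl);
  }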
Matt Wixson found and fixed a bug in the SCP/SFTP area where the
code treated a 0 return code from libssh2 as equivalent to
EAGAIN, while in reality it isn't. The problem caused a hang in
SFTP transfers from a MessageWay server.
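The distinction, sketched against libssh2's SFTP read call (the
handle and buffer are assumed to exist): LIBSSH2_ERROR_EAGAIN is a
negative constant, so a return of 0 means end-of-file and must not
be treated as a would-block condition.

  #include <libssh2.h>
  #include <libssh2_sftp.h>

  char buf[4096];
  ssize_t rc = libssh2_sftp_read(sftp_handle, buf, sizeof(buf));
  if(rc == LIBSSH2_ERROR_EAGAIN) {
    /* non-blocking: wait for the socket to be ready and call again */
  }
  else if(rc < 0) {
    /* a real error */
  }
  else if(rc == 0) {
    /* EOF: nothing more to read - NOT the same as EAGAIN */
  }
  else {
    /* rc bytes were read into buf */
  }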
strlen() returns size_t, but the ssh libraries want 'unsigned
int'. Add explicit casts and use the _ex versions of the ssh
library calls.
Signed-off-by: Ben Greear <greearb@candelatech.com>
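As an illustration (the sftp session and the path are assumed to
exist), the _ex variants take the string length as an unsigned int,
so the size_t result of strlen() gets an explicit cast:

  #include <string.h>
  #include <libssh2_sftp.h>

  const char *path = "/tmp/file.txt";  /* placeholder path */
  LIBSSH2_SFTP_HANDLE *fh =
    libssh2_sftp_open_ex(sftp, path,
                         (unsigned int)strlen(path), /* size_t -> unsigned int */
                         LIBSSH2_FXF_READ, 0, LIBSSH2_SFTP_OPENFILE);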
If you pass a POP3 URL that does not contain a message ID as
part of the URL, libcurl currently asks for 'INBOX', which just
causes the POP3 server to return an error.
The change makes libcurl treat an empty message ID as a request
for LIST (the list of POP3 message IDs). The user's code can then
parse this list and download individual messages as desired.
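A sketch of the resulting behavior (host and credentials are
placeholders): an empty path now triggers a LIST, while a message ID
in the path retrieves that single message.

  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    /* no message ID in the path: libcurl now asks for LIST */
    curl_easy_setopt(curl, CURLOPT_URL, "pop3://user:secret@pop.example.com/");
    curl_easy_perform(curl);
    /* a message ID in the path retrieves that single message */
    curl_easy_setopt(curl, CURLOPT_URL, "pop3://user:secret@pop.example.com/1");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }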
Ben Greear brought a patch that from now on allows all protocols
to specify a user name and password within the URL, in the same
manner HTTP and FTP have allowed in the past - although far from
all of the protocols libcurl supports actually have that feature
in their URL definition spec.
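For example (example.com and the credentials are placeholders), the
same user:password syntax that HTTP and FTP URLs use can now be
given for the other protocols as well:

  scp://alice:secret@example.com/path/file
  pop3://alice:secret@example.com/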
Bob Richmond: There's an annoying situation where libcurl will
read new HTTP response data from a socket and then check whether
a timeout (if one is set) has expired. If the last packet
received constitutes the end of the response body, libcurl still
treats it as a timeout condition and reports a message like:
"Operation timed out after 3000 milliseconds with 876 out of 876
bytes received"
It should only count as a timeout if the timer lapsed and we
DIDN'T receive the end of the response body yet.
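A compilable sketch of the corrected rule (the helper and its
parameters are hypothetical, not libcurl's internals):

  #include <stdbool.h>

  /* report a timeout only if the timer lapsed AND the end of the
     response body has not been received yet */
  static bool is_timeout(long elapsed_ms, long timeout_ms, bool body_complete)
  {
    return timeout_ms > 0 && elapsed_ms >= timeout_ms && !body_complete;
  }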
This commit fixes the cmake build of curl and cleans up the
cmake code a little. It removes some commented-out code and some
trailing whitespace. To get curl to build, the binary tree's
include/curl directory needed to be added to the include path,
and SIZEOF_SHORT needed to be defined. A check for missing
SIZEOF_* defines needed by warnless.c was added.
Kenny To filed the bug report #2963679 with a patch to fix a
problem he experienced with doing multi interface HTTP POST over
a proxy using PROXYTUNNEL. He found a case where it would connect
fine, but bits.tcpconnect was not set correctly, so libcurl
didn't work properly.
(http://curl.haxx.se/bug/view.cgi?id=2963679)
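For reference, a minimal sketch of the scenario (proxy address and
data are placeholders): an HTTP POST over a CONNECT tunnel, driven
by the multi interface.

  #include <curl/curl.h>

  CURL *easy = curl_easy_init();
  CURLM *multi = curl_multi_init();
  int running = 0;

  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/upload");
  curl_easy_setopt(easy, CURLOPT_PROXY, "http://proxy.example.com:3128");
  curl_easy_setopt(easy, CURLOPT_HTTPPROXYTUNNEL, 1L); /* CONNECT tunnel */
  curl_easy_setopt(easy, CURLOPT_POSTFIELDS, "hello=world");

  curl_multi_add_handle(multi, easy);
  do {
    curl_multi_perform(multi, &running);
    /* a real program would wait on curl_multi_fdset()/select() here */
  } while(running);
  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);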
Akos Pasztory filed Debian bug report #572276
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572276)
mentioning a problem with a resource that returns chunked-encoded
data _and_ a Content-Length header, and libcurl failed to
properly ignore the latter information.
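For example, a response of this shape must be handled as chunked
and its Content-Length ignored, since RFC 2616 gives
Transfer-Encoding precedence when both headers are present:

  HTTP/1.1 200 OK
  Transfer-Encoding: chunked
  Content-Length: 100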
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface to
do one FTP transfer and, after its completion, used a second easy
handle to do another FTP transfer on the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive, curl must
keep an easy handle around internally, purely for the purpose of
having an easy handle available when the connection is later
disconnected. The code assumed that when the easy handle was
removed and an internal reference was kept, that internal copy
could be killed later on when a new easy handle came along using
the same connection. This was wrong, as Hauke's example showed:
the removed handle wasn't killed for real until later, and this
caused a double close attempt => segfault.
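A sketch of the triggering sequence (the URLs are placeholders):
two consecutive FTP transfers to the same server, each with its own
easy handle, driven by one multi handle.

  #include <curl/curl.h>

  static void ftp_once(CURLM *multi, const char *url)
  {
    CURL *easy = curl_easy_init();
    int running = 0;
    curl_easy_setopt(easy, CURLOPT_URL, url);
    curl_multi_add_handle(multi, easy);
    do {
      curl_multi_perform(multi, &running);
      /* a real program would wait on curl_multi_fdset()/select() here */
    } while(running);
    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
  }

  int main(void)
  {
    CURLM *multi = curl_multi_init();
    ftp_once(multi, "ftp://example.com/a.txt"); /* connection kept alive */
    ftp_once(multi, "ftp://example.com/b.txt"); /* reuses that connection */
    curl_multi_cleanup(multi);
    return 0;
  }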
Looking at the code of Curl_resolv_timeout() in hostip.c, I think
that in case of a timeout, the signal handler for SIGALRM never
gets removed. I think that in my case it gets executed at some
point later on when execution has long left Curl_resolv_timeout()
or even the cURL library.
The code that is jumped to with siglongjmp() simply sets the
error message to "name lookup timed out" and then returns with
CURLRESOLV_ERROR. I guess that instead of simply returning
without cleaning up, the code should have a goto that jumps to
the spot right after the call to Curl_resolv().
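The cleanup being asked for boils down to restoring the previous
SIGALRM disposition and cancelling the alarm on every exit path. A
standalone sketch of that pattern (not curl's actual code):

  #include <setjmp.h>
  #include <signal.h>
  #include <unistd.h>

  static sigjmp_buf jmpenv;

  static void alarm_handler(int sig)
  {
    (void)sig;
    siglongjmp(jmpenv, 1);
  }

  int resolve_with_timeout(unsigned int secs)
  {
    struct sigaction sa, old;
    int rc = 0;
    sa.sa_handler = alarm_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGALRM, &sa, &old);
    alarm(secs);
    if(sigsetjmp(jmpenv, 1)) {
      rc = -1;       /* "name lookup timed out" */
      goto cleanup;  /* do NOT just return without cleaning up */
    }
    /* ... the actual blocking name resolve would run here ... */
  cleanup:
    alarm(0);                       /* cancel any pending alarm */
    sigaction(SIGALRM, &old, NULL); /* restore the previous handler */
    return rc;
  }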