From 41d8186c7e1e1cc84fc7199a999a2d7d22cd3ae1 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg
Date: Sat, 8 Dec 2007 23:00:00 +0000
Subject: [PATCH] reformat to FAQ/CONTRIBUTE style, for nicer web-look when I
 apply the magic script(s) on it online

---
 docs/TODO | 684 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 439 insertions(+), 245 deletions(-)

diff --git a/docs/TODO b/docs/TODO
index 11f6f8942f..aae0f847df 100644
--- a/docs/TODO
+++ b/docs/TODO
@@ -4,346 +4,540 @@
 | (__| |_| | _ <| |___
  \___|\___/|_| \_\_____|

-TODO
+ Things that could be nice to do in the future

 Things to do in project cURL. Please tell us what you think, contribute and
- send us patches that improve things! Also check the http://curl.haxx.se/dev
- web section for various technical development notes.
+ send us patches that improve things!

 All bugs documented in the KNOWN_BUGS document are subject for fixing!

- LIBCURL
+ 1. libcurl
+ 1.1 Zero-copy interface
+ 1.2 More data sharing
+ 1.3 struct lifreq
+ 1.4 Get IP address
+ 1.5 c-ares ipv6
+ 1.6 configure-based info in public headers

- * Introduce another callback interface for upload/download that makes one
- less copy of data and thus a faster operation.
- [http://curl.haxx.se/dev/no_copy_callbacks.txt]

+ 2. libcurl - multi interface
+ 2.1 More non-blocking
+ 2.2 Pause transfers
+ 2.3 Remove easy interface internally
+ 2.4 Avoid having to remove/readd handles

- * More data sharing. curl_share_* functions already exist and work, and they
- can be extended to share more. For example, enable sharing of the ares
- channel and the connection cache.

+ 3. Documentation
+ 3.1 More and better

- * Introduce a new error code indicating authentication problems (for proxy
- CONNECT error 407 for example). This cannot be an error code, we must not
- return informational stuff as errors, consider a new info returned by
- curl_easy_getinfo() http://curl.haxx.se/bug/view.cgi?id=845941

+ 4. FTP
+ 4.1 PRET
+ 4.2 Alter passive/active on failure and retry
+ 4.3 Earlier bad letter detection
+ 4.4 REST for large files
+ 4.5 FTP proxy support
+ 4.6 PORT port range
+ 4.7 ASCII support

- * Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
- SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
- To support ipv6 interface addresses properly.

+ 5. HTTP
+ 5.1 Other HTTP versions with CONNECT
+ 5.2 Better persistency for HTTP 1.0

- * Add the following to curl_easy_getinfo(): GET_HTTP_IP, GET_FTP_IP and
- GET_FTP_DATA_IP. Return a string with the used IP. Suggested by Alan.

+ 6. TELNET
+ 6.1 ditch stdin
+ 6.2 ditch telnet-specific select

- * Add option that changes the interval in which the progress callback is
- called at most.

+ 7. SSL
+ 7.1 Disable specific versions
+ 7.2 Provide mutex locking API
+ 7.3 dumpcert
+ 7.4 Evaluate SSL patches
+ 7.5 Cache OpenSSL contexts
+ 7.6 Export session ids
+ 7.7 Provide callback for cert verification
+ 7.8 Support other SSL libraries
+ 7.9 Support SRP on the TLS layer
+ 7.10 improve configure --with-ssl

- * Make libcurl built with c-ares use c-ares' IPv6 abilities. They weren't
- present when we first added c-ares support but they have been added since!
- When this is done and works, we can actually start considering making c-ares
- powered libcurl the default build (which of course would require that we'd
- bundle the c-ares source code in the libcurl source code releases).

+ 8.
GnuTLS + 8.1 Make NTLM work without OpenSSL functions + 8.2 SSl engine stuff + 8.3 SRP + 8.4 non-blocking + 8.5 check connection - * Make the curl/*.h headers include the proper system includes based on what - was present at the time when configure was run. Currently, the sys/select.h - header is for example included by curl/multi.h only on specific platforms - we know MUST have it. This is error-prone. We therefore want the header - files to adapt to configure results. Those results must be stored in a new - header and they must use a curl name space, i.e not be HAVE_* prefix (as - that would risk collide with other apps that use libcurl and that runs - configure). + 9. LDAP + 9.1 ditch ldap-specific select - Work on this has been started but hasn't been finished, and the initial - patch and some details are found here: - http://curl.haxx.se/mail/lib-2006-12/0084.html + 10. New protocols + 10.1 RTSP + 10.2 RSYNC - LIBCURL - multi interface + 11. Client + 11.1 Content-Disposition + 11.2 sync + 11.3 glob posts + 11.4 prevent file overwriting + 11.5 ftp wildcard download + 11.6 simultaneous parallel transfers + 11.7 provide formpost headers + 11.8 url-specific options - * Make sure we don't ever loop because of non-blocking sockets return - EWOULDBLOCK or similar. The GnuTLS connection etc. + 12. Build + 12.1 roffit - * Make transfers treated more carefully. We need a way to tell libcurl we - have data to write, as the current system expects us to upload data each - time the socket is writable and there is no way to say that we want to - upload data soon just not right now, without that aborting the upload. The - opposite situation should be possible as well, that we tell libcurl we're - ready to accept read data. Today libcurl feeds the data as soon as it is - available for reading, no matter what. + 13. Test suite + 13.1 SSL tunnel + 13.2 nicer lacking perl message + 13.3 more protocols supported + 13.4 more platforms supported - * Make curl_easy_perform() a wrapper-function that simply creates a multi - handle, adds the easy handle to it, runs curl_multi_perform() until the - transfer is done, then detach the easy handle, destroy the multi handle and - return the easy handle's return code. This will thus make everything - internally use and assume the multi interface. The select()-loop should use - curl_multi_socket(). + 14. Next SONAME bump + 14.1 http-style HEAD output for ftp + 14.2 combine error codes - * curl_multi_handle_control() - this can control the easy handle (while) - added to a multi handle in various ways: - o RESTART, unconditionally restart this easy handle's transfer from the - start, re-init the state - o RESTART_COMPLETED, restart this easy handle's transfer but only if the - existing transfer has already completed and it is in a "finished state". - o STOP, just stop this transfer and consider it completed - o PAUSE? - o RESUME? + 15. Next major release + 15.1 cleanup return codes + 15.2 remove obsolete defines + 15.3 size_t + 15.4 remove several functions + 15.5 remove CURLOPT_FAILONERROR - DOCUMENTATION +============================================================================== - * More and better +1. libcurl - FTP +1.1 Zero-copy interface - * PRET is a command that primarily "drftpd" supports, which could be useful - when using libcurl against such a server. It is a non-standard and a rather - oddly designed command, but... 
- http://curl.haxx.se/bug/feature.cgi?id=1729967

+ Introduce another callback interface for upload/download that makes one less
+ copy of data and thus a faster operation.
+ [http://curl.haxx.se/dev/no_copy_callbacks.txt]

- * When trying to connect passively to a server which only supports active
- connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
- connection. There could be a way to fallback to an active connection (and
- vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793

+1.2 More data sharing

- * Make the detection of (bad) %0d and %0a codes in FTP url parts earlier in
- the process to avoid doing a resolve and connect in vain.

+ curl_share_* functions already exist and work, and they can be extended to
+ share more. For example, enable sharing of the ares channel and the
+ connection cache.

- * REST fix for servers not behaving well on >2GB requests. This should fail
- if the server doesn't set the pointer to the requested index. The tricky
- (impossible?) part is to figure out if the server did the right thing or
- not.

+1.3 struct lifreq

- * Support the most common FTP proxies, Philip Newton provided a list
- allegedly from ncftp:
- http://curl.haxx.se/mail/archive-2003-04/0126.html

+ Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
+ SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
+ This is needed to properly support IPv6 addresses on network interfaces.

- * Make CURLOPT_FTPPORT support an additional port number on the IP/if/name,
- like "blabla:[port]" or possibly even "blabla:[portfirst]-[portsecond]".
- http://curl.haxx.se/bug/feature.cgi?id=1505166

+1.4 Get IP address

- * FTP ASCII transfers do not follow RFC959. They don't convert the data
- accordingly.

+ Add the following to curl_easy_getinfo(): GET_HTTP_IP, GET_FTP_IP and
+ GET_FTP_DATA_IP. Return a string with the used IP.

- * Since USERPWD always override the user and password specified in URLs, we
- might need another way to specify user+password for anonymous ftp logins.

+1.5 c-ares ipv6

- * The FTP code should get a way of returning errors that is known to still
- have the control connection alive and sound. Currently, a returned error
- from within ftp-functions does not tell if the control connection is still
- OK to use or not. This causes libcurl to fail to re-use connections
- slightly too often.

+ Make libcurl built with c-ares use c-ares' IPv6 abilities. They weren't
+ present when we first added c-ares support but they have been added since!
+ When this is done and works, we can actually start considering making c-ares
+ powered libcurl the default build (which of course would require that we'd
+ bundle the c-ares source code in the libcurl source code releases).

- HTTP

+1.6 configure-based info in public headers

- * When doing CONNECT to a HTTP proxy, libcurl always uses HTTP/1.0. This has
- never been reported as causing trouble to anyone, but should be considered
- to use the HTTP version the user has chosen.

+ Make the curl/*.h headers include the proper system includes based on what
+ was present at the time when configure was run. Currently, the sys/select.h
+ header is for example included by curl/multi.h only on specific platforms we
+ know MUST have it. This is error-prone. We therefore want the header files to
+ adapt to configure results.
Those results must be stored in a new header and + they must use a curl name space, i.e not be HAVE_* prefix (as that would risk + collide with other apps that use libcurl and that runs configure). - * "Better" support for persistent connections over HTTP 1.0 - http://curl.haxx.se/bug/feature.cgi?id=1089001 + Work on this has been started but hasn't been finished, and the initial patch + and some details are found here: + http://curl.haxx.se/mail/lib-2006-12/0084.html - TELNET + The remaining problems to solve involve the platforms that can't run + configure. - * Reading input (to send to the remote server) on stdin is a crappy solution - for library purposes. We need to invent a good way for the application to - be able to provide the data to send. +2. libcurl - multi interface - * Move the telnet support's network select() loop go away and merge the code - into the main transfer loop. Until this is done, the multi interface won't - work for telnet. +2.1 More non-blocking - SSL + Make sure we don't ever loop because of non-blocking sockets return + EWOULDBLOCK or similar. The GnuTLS connection etc. - * Provide an option that allows for disabling specific SSL versions, such as - SSLv2 http://curl.haxx.se/bug/feature.cgi?id=1767276 +2.2 Pause transfers - * Provide a libcurl API for setting mutex callbacks in the underlying SSL - library, so that the same application code can use mutex-locking - independently of OpenSSL or GnutTLS being used. + Make transfers treated more carefully. We need a way to tell libcurl we have + data to write, as the current system expects us to upload data each time the + socket is writable and there is no way to say that we want to upload data + soon just not right now, without that aborting the upload. The opposite + situation should be possible as well, that we tell libcurl we're ready to + accept read data. Today libcurl feeds the data as soon as it is available for + reading, no matter what. - * Anton Fedorov's "dumpcert" patch: - http://curl.haxx.se/mail/lib-2004-03/0088.html +2.3 Remove easy interface internally - * Evaluate/apply Gertjan van Wingerde's SSL patches: - http://curl.haxx.se/mail/lib-2004-03/0087.html + Make curl_easy_perform() a wrapper-function that simply creates a multi + handle, adds the easy handle to it, runs curl_multi_perform() until the + transfer is done, then detach the easy handle, destroy the multi handle and + return the easy handle's return code. This will thus make everything + internally use and assume the multi interface. The select()-loop should use + curl_multi_socket(). - * "Look at SSL cafile - quick traces look to me like these are done on every - request as well, when they should only be necessary once per ssl context - (or once per handle)". The major improvement we can rather easily do is to - make sure we don't create and kill a new SSL "context" for every request, - but instead make one for every connection and re-use that SSL context in - the same style connections are re-used. It will make us use slightly more - memory but it will libcurl do less creations and deletions of SSL contexts. +2.4 Avoid having to remove/readd handles - * Add an interface to libcurl that enables "session IDs" to get - exported/imported. Cris Bailiff said: "OpenSSL has functions which can - serialise the current SSL state to a buffer of your choice, and - recover/reset the state from such a buffer at a later date - this is used - by mod_ssl for apache to implement and SSL session ID cache". 
+ curl_multi_handle_control() - this can control the easy handle (while) added + to a multi handle in various ways: - * OpenSSL supports a callback for customised verification of the peer - certificate, but this doesn't seem to be exposed in the libcurl APIs. Could - it be? There's so much that could be done if it were! (brought by Chris - Clark) + o RESTART, unconditionally restart this easy handle's transfer from the + start, re-init the state - * Make curl's SSL layer capable of using other free SSL libraries. Such as - MatrixSSL (http://www.matrixssl.org/). + o RESTART_COMPLETED, restart this easy handle's transfer but only if the + existing transfer has already completed and it is in a "finished state". - * Peter Sylvester's patch for SRP on the TLS layer. - Awaits OpenSSL support for this, no need to support this in libcurl before - there's an OpenSSL release that does it. + o STOP, just stop this transfer and consider it completed - * make the configure --with-ssl option first check for OpenSSL, then GnuTLS, - then NSS... + o PAUSE? - GnuTLS + o RESUME? - * Get NTLM working using the functions provided by libgcrypt, since GnuTLS - already depends on that to function. Not strictly SSL/TLS related, but - hey... Another option is to get available DES and MD4 source code from the - cryptopp library. They are fine license-wise, but are C++. +3. Documentation - * SSL engine stuff? +3.1 More and better - * Work out a common method with Peter Sylvester's OpenSSL-patch for SRP - on the TLS to provide name and password + Exactly - * Fix the connection phase to be non-blocking when multi interface is used +4. FTP - * Add a way to check if the connection seems to be alive, to correspond to - the SSL_peak() way we use with OpenSSL. +4.1 PRET - LDAP + PRET is a command that primarily "drftpd" supports, which could be useful + when using libcurl against such a server. It is a non-standard and a rather + oddly designed command, but... + http://curl.haxx.se/bug/feature.cgi?id=1729967 + +4.2 Alter passive/active on failure and retry + + When trying to connect passively to a server which only supports active + connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the + connection. There could be a way to fallback to an active connection (and + vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793 + +4.3 Earlier bad letter detection + + Make the detection of (bad) %0d and %0a codes in FTP url parts earlier in the + process to avoid doing a resolve and connect in vain. + +4.4 REST for large files + + REST fix for servers not behaving well on >2GB requests. This should fail if + the server doesn't set the pointer to the requested index. The tricky + (impossible?) part is to figure out if the server did the right thing or not. + +4.5 FTP proxy support + + Support the most common FTP proxies, Philip Newton provided a list allegedly + from ncftp. This is not a subject without debate, and is probably not really + suitable for libcurl. http://curl.haxx.se/mail/archive-2003-04/0126.html + +4.6 PORT port range + + Make CURLOPT_FTPPORT support an additional port number on the IP/if/name, + like "blabla:[port]" or possibly even "blabla:[portfirst]-[portsecond]". + http://curl.haxx.se/bug/feature.cgi?id=1505166 + +4.7 ASCII support + + FTP ASCII transfers do not follow RFC959. They don't convert the data + accordingly. + +5. HTTP + +5.1 Other HTTP versions with CONNECT + + When doing CONNECT to a HTTP proxy, libcurl always uses HTTP/1.0. 
This has
+ never been reported as causing trouble to anyone, but libcurl should arguably
+ use the HTTP version the user has asked for.
+
+5.2 Better persistency for HTTP 1.0
+
+ "Better" support for persistent connections over HTTP 1.0
+ http://curl.haxx.se/bug/feature.cgi?id=1089001
+
+6. TELNET
+
+6.1 ditch stdin
+
+ Reading input (to send to the remote server) on stdin is a crappy solution
+ for library purposes. We need to invent a good way for the application to be
+ able to provide the data to send.
+
+6.2 ditch telnet-specific select
+
+ Make the telnet support's network select() loop go away and merge the code
+ into the main transfer loop. Until this is done, the multi interface won't
+ work for telnet.
+
+7. SSL
+
+7.1 Disable specific versions
+
+ Provide an option that allows for disabling specific SSL versions, such as
+ SSLv2 http://curl.haxx.se/bug/feature.cgi?id=1767276
+
+7.2 Provide mutex locking API
+
+ Provide a libcurl API for setting mutex callbacks in the underlying SSL
+ library, so that the same application code can use mutex-locking
+ independently of OpenSSL or GnuTLS being used.
+
+7.3 dumpcert
+
+ Anton Fedorov's "dumpcert" patch:
+ http://curl.haxx.se/mail/lib-2004-03/0088.html
+
+7.4 Evaluate SSL patches
+
+ Evaluate/apply Gertjan van Wingerde's SSL patches:
+ http://curl.haxx.se/mail/lib-2004-03/0087.html
+
+7.5 Cache OpenSSL contexts
+
+ "Look at SSL cafile - quick traces look to me like these are done on every
+ request as well, when they should only be necessary once per ssl context (or
+ once per handle)". The major improvement we can rather easily do is to make
+ sure we don't create and kill a new SSL "context" for every request, but
+ instead make one for every connection and re-use that SSL context in the same
+ style connections are re-used. It will make us use slightly more memory but
+ it will make libcurl do fewer creations and deletions of SSL contexts.
+
+7.6 Export session ids
+
+ Add an interface to libcurl that enables "session IDs" to get
+ exported/imported. Cris Bailiff said: "OpenSSL has functions which can
+ serialise the current SSL state to a buffer of your choice, and recover/reset
+ the state from such a buffer at a later date - this is used by mod_ssl for
+ apache to implement an SSL session ID cache".
+
+7.7 Provide callback for cert verification
+
+ OpenSSL supports a callback for customised verification of the peer
+ certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
+ it be? There's so much that could be done if it were!
+
+7.8 Support other SSL libraries
+
+ Make curl's SSL layer capable of using other free SSL libraries. Such as
+ MatrixSSL (http://www.matrixssl.org/).
+
+7.9 Support SRP on the TLS layer
+
+ Peter Sylvester's patch for SRP on the TLS layer. Awaits OpenSSL support for
+ this, no need to support this in libcurl before there's an OpenSSL release
+ that does it.
+
+7.10 improve configure --with-ssl
+
+ Make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
+ then NSS...
+
+8. GnuTLS
+
+8.1 Make NTLM work without OpenSSL functions
+
+ Get NTLM working using the functions provided by libgcrypt, since GnuTLS
+ already depends on that to function. Not strictly SSL/TLS related, but
+ hey... Another option is to get available DES and MD4 source code from the
+ cryptopp library. They are fine license-wise, but are C++. (A rough libgcrypt
+ sketch for the MD4 part follows after 8.2.)
+
+8.2 SSL engine stuff
+
+ Is this even possible?
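+
+ As an illustration of the MD4 half of 8.1, here is a minimal sketch, assuming
+ libgcrypt is initialised elsewhere and the password has already been encoded
+ as UTF-16LE (which NTLM needs regardless of crypto backend). The nt_hash()
+ helper name is made up for this example:
+
+   #include <gcrypt.h>
+
+   /* sketch: the NTLM "NT hash" is the MD4 digest of the password encoded as
+      UTF-16LE; a real build must also call gcry_check_version() once at
+      startup before using libgcrypt */
+   static void nt_hash(const unsigned char *pw_utf16le, size_t len,
+                       unsigned char digest[16])
+   {
+     /* one-shot hashing entry point in libgcrypt */
+     gcry_md_hash_buffer(GCRY_MD_MD4, digest, pw_utf16le, len);
+   }
+
+ Single DES for the challenge response looks doable with libgcrypt's
+ gcry_cipher_open(GCRY_CIPHER_DES, ...) as well, but that part is untested
+ speculation here.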
+ +8.3 SRP + + Work out a common method with Peter Sylvester's OpenSSL-patch for SRP on the + TLS to provide name and password. GnuTLS already supports it... + +8.4 non-blocking + + Fix the connection phase to be non-blocking when multi interface is used + +8.5 check connection + + Add a way to check if the connection seems to be alive, to correspond to the + SSL_peak() way we use with OpenSSL. + +9. LDAP + +9.1 ditch ldap-specific select * Look over the implementation. The looping will have to "go away" from the lib/ldap.c source file and get moved to the main network code so that the multi interface and friends will work for LDAP as well. - NEW PROTOCOLS +10. New protocols - * RTSP - RFC2326 (protocol - very HTTP-like, also contains URL description) +10.1 RTSP - * RSYNC (no RFCs for protocol nor URI/URL format). An implementation should - most probably use an existing rsync library, such as librsync. + RFC2326 (protocol - very HTTP-like, also contains URL description) - CLIENT +10.2 RSYNC - * Add option that is similar to -O but that takes the output file name from - the Content-Disposition: header, and/or uses the local file name used in - redirections for the cases the server bounces the request further to a - different file (name): http://curl.haxx.se/bug/feature.cgi?id=1364676 + (no RFCs for protocol nor URI/URL format). An implementation should most + probably use an existing rsync library, such as librsync. - * "curl --sync http://example.com/feed[1-100].rss" or - "curl --sync http://example.net/{index,calendar,history}.html" +11. Client - Downloads a range or set of URLs using the remote name, but only if the - remote file is newer than the local file. A Last-Modified HTTP date header - should also be used to set the mod date on the downloaded file. - (idea from "Brianiac") +11.1 Content-Disposition - * Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'. - Requested by Dane Jensen and others. This is easily scripted though. + Add option that is similar to -O but that takes the output file name from the + Content-Disposition: header, and/or uses the local file name used in + redirections for the cases the server bounces the request further to a + different file (name): http://curl.haxx.se/bug/feature.cgi?id=1364676 - * Add an option that prevents cURL from overwriting existing local files. When - used, and there already is an existing file with the target file name - (either -O or -o), a number should be appended (and increased if already - existing). So that index.html becomes first index.html.1 and then - index.html.2 etc. Jeff Pohlmeyer suggested. +11.2 sync - * "curl ftp://site.com/*.txt" + "curl --sync http://example.com/feed[1-100].rss" or + "curl --sync http://example.net/{index,calendar,history}.html" - * The client could be told to use maximum N simultaneous parallel transfers - and then just make sure that happens. It should of course not make more - than one connection to the same remote host. This would require the client - to use the multi interface. http://curl.haxx.se/bug/feature.cgi?id=1558595 + Downloads a range or set of URLs using the remote name, but only if the + remote file is newer than the local file. A Last-Modified HTTP date header + should also be used to set the mod date on the downloaded file. - * Extending the capabilities of the multipart formposting. 
How about leaving - the ';type=foo' syntax as it is and adding an extra tag (headers) which - works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where - fil1.hdr contains extra headers like +11.3 glob posts - Content-Type: text/plain; charset=KOI8-R" - Content-Transfer-Encoding: base64 - X-User-Comment: Please don't use browser specific HTML code + Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'. + This is easily scripted though. - which should overwrite the program reasonable defaults (plain/text, - 8bit...) (Idea brough to us by kromJx) +11.4 prevent file overwriting - * ability to specify the classic computing suffixes on the range - specifications. For example, to download the first 500 Kilobytes of a file, - be able to specify the following for the -r option: "-r 0-500K" or for the - first 2 Megabytes of a file: "-r 0-2M". (Mark Smith suggested) + Add an option that prevents cURL from overwriting existing local files. When + used, and there already is an existing file with the target file name + (either -O or -o), a number should be appended (and increased if already + existing). So that index.html becomes first index.html.1 and then + index.html.2 etc. - * --data-encode that URL encodes the data before posting - http://curl.haxx.se/mail/archive-2003-11/0091.html (Kevin Roth suggested) +11.5 ftp wildcard download - * Provide a way to make options bound to a specific URL among several on the - command line. Possibly by letting ':' separate options between URLs, - similar to this: + "curl ftp://site.com/*.txt" - curl --data foo --url url.com : \ - --url url2.com : \ - --url url3.com --data foo3 +11.6 simultaneous parallel transfers - (More details: http://curl.haxx.se/mail/archive-2004-07/0133.html) + The client could be told to use maximum N simultaneous parallel transfers and + then just make sure that happens. It should of course not make more than one + connection to the same remote host. This would require the client to use the + multi interface. http://curl.haxx.se/bug/feature.cgi?id=1558595 - The example would do a POST-GET-POST combination on a single command line. +11.7 provide formpost headers - BUILD + Extending the capabilities of the multipart formposting. How about leaving + the ';type=foo' syntax as it is and adding an extra tag (headers) which + works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where + fil1.hdr contains extra headers like - * Consider extending 'roffit' to produce decent ASCII output, and use that - instead of (g)nroff when building src/hugehelp.c + Content-Type: text/plain; charset=KOI8-R" + Content-Transfer-Encoding: base64 + X-User-Comment: Please don't use browser specific HTML code - TEST SUITE + which should overwrite the program reasonable defaults (plain/text, + 8bit...) - * Make our own version of stunnel for simple port forwarding to enable HTTPS - and FTP-SSL tests without the stunnel dependency, and it could allow us to - provide test tools built with either OpenSSL or GnuTLS +11.8 url-specific options - * If perl wasn't found by the configure script, don't attempt to run the - tests but explain something nice why it doesn't. + Provide a way to make options bound to a specific URL among several on the + command line. Possibly by letting ':' separate options between URLs, + similar to this: - * Extend the test suite to include more protocols. The telnet could just do - ftp or http operations (for which we have test servers). 
+ curl --data foo --url url.com : \ + --url url2.com : \ + --url url3.com --data foo3 - * Make the test suite work on more platforms. OpenBSD and Mac OS. Remove - fork()s and it should become even more portable. + (More details: http://curl.haxx.se/mail/archive-2004-07/0133.html) - NEXT soname bump + The example would do a POST-GET-POST combination on a single command line. - * #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers - from being output in NOBODY requests over ftp +12. Build - * Combine some of the error codes to remove duplicates. The original - numbering should not be changed, and the old identifiers would be - macroed to the new ones in an CURL_NO_OLDIES section to help with - backward compatibility. +12.1 roffit - Candidates for removal and their replacements: + Consider extending 'roffit' to produce decent ASCII output, and use that + instead of (g)nroff when building src/hugehelp.c - CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND - CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND - CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR - CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT - CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT - CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL - CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND - CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED +13. Test suite - NEXT MAJOR RELEASE +13.1 SSL tunnel - * curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a - CURLMcode. These should be changed to be the same. + Make our own version of stunnel for simple port forwarding to enable HTTPS + and FTP-SSL tests without the stunnel dependency, and it could allow us to + provide test tools built with either OpenSSL or GnuTLS - * remove obsolete defines from curl/curl.h +13.2 nicer lacking perl message - * make several functions use size_t instead of int in their APIs + If perl wasn't found by the configure script, don't attempt to run the tests + but explain something nice why it doesn't. - * remove the following functions from the public API: - curl_getenv - curl_mprintf (and variations) - curl_strequal - curl_strnequal +13.3 more protocols supported - They will instead become curlx_ - alternatives. That makes the curl app - still capable of building with them from source. + Extend the test suite to include more protocols. The telnet could just do ftp + or http operations (for which we have test servers). - * Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird - internally. Let the app judge success or not for itself. +13.4 more platforms supported + + Make the test suite work on more platforms. OpenBSD and Mac OS. Remove + fork()s and it should become even more portable. + +14. Next SONAME bump + +14.1 http-style HEAD output for ftp + + #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers + from being output in NOBODY requests over ftp + +14.2 combine error codes + + Combine some of the error codes to remove duplicates. The original + numbering should not be changed, and the old identifiers would be + macroed to the new ones in an CURL_NO_OLDIES section to help with + backward compatibility. 
+
+ Candidates for removal and their replacements:
+
+ CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND
+ CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND
+ CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR
+ CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT
+ CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT
+ CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL
+ CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND
+ CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
+
+15. Next major release
+
+15.1 cleanup return codes
+
+ curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
+ CURLMcode. These should be changed to be the same.
+
+15.2 remove obsolete defines
+
+ Remove obsolete defines from curl/curl.h
+
+15.3 size_t
+
+ Make several functions use size_t instead of int in their APIs
+
+15.4 remove several functions
+
+ Remove the following functions from the public API:
+
+   curl_getenv
+   curl_mprintf (and variations)
+   curl_strequal
+   curl_strnequal
+
+ They will instead become curlx_ alternatives. That keeps the curl app still
+ capable of building with them from source.
+
+15.5 remove CURLOPT_FAILONERROR
+
+ Remove support for CURLOPT_FAILONERROR since it has gotten too kludgy and
+ weird internally. Let the app judge success or failure for itself.
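+
+ As a minimal sketch (not an official recipe) of what applications would do
+ instead, the response code can be fetched with curl_easy_getinfo() and judged
+ by the app; the fetch_and_judge() helper and the ">= 400 means failure"
+ policy below are only examples:
+
+   #include <curl/curl.h>
+
+   /* sketch: the application, not libcurl, decides what counts as failure */
+   static int fetch_and_judge(const char *url)
+   {
+     CURL *curl = curl_easy_init();
+     CURLcode res;
+     long response = 0;
+
+     if(!curl)
+       return 1;
+
+     curl_easy_setopt(curl, CURLOPT_URL, url);
+     res = curl_easy_perform(curl);
+
+     if(res == CURLE_OK)
+       curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response);
+
+     curl_easy_cleanup(curl);
+
+     /* the >= 400 cut-off is this example's policy, not libcurl's */
+     return (res != CURLE_OK) || (response >= 400);
+   }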