Online: http://curl.haxx.se/docs/httpscripting.html
Date: Jan 19, 2011

The Art Of Scripting HTTP Requests Using Curl
=============================================

This document will assume that you're familiar with HTML and general
networking.

The ability to write scripts is essential to making a good computer
system. Unix's capability to be extended by shell scripts and various tools
that run automated commands is one reason why it has succeeded so well.

The increasing number of applications moving to the web has made "HTTP
scripting" more frequently requested and wanted. Being able to automatically
extract information from the web, to impersonate users, and to post or
upload data to web servers are all important tasks today.

Curl is a command line tool for doing all sorts of URL manipulations and
transfers, but this particular document will focus on how to use it when
doing HTTP requests for fun and profit. I'll assume that you know how to
invoke 'curl --help' or 'curl --manual' to get basic information about it.

Curl is not written to do everything for you. It makes the requests, it gets
the data, it sends data and it retrieves the information. You probably need
to glue everything together using some kind of script language or repeated
manual invocations.

1. The HTTP Protocol

HTTP is the protocol used to fetch data from web servers. It is a very simple
protocol that is built upon TCP/IP. The protocol also allows information to
be sent to the server from the client using a few different methods, as will
be shown here.

HTTP is plain ASCII text lines being sent by the client to a server to
request a particular action, and then the server replies with a few text
lines before the actual requested content is sent to the client.

The client, curl, sends an HTTP request. The request contains a method (like
GET, POST, HEAD etc), a number of request headers and sometimes a request
body. The HTTP server responds with a status line (indicating if things went
well), response headers and most often also a response body. The "body" part
is the plain data you requested, like the actual HTML or the image etc.
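
To illustrate, a minimal and heavily simplified exchange might look something
like this (real requests and responses carry more headers):

  GET /index.html HTTP/1.1
  Host: www.example.com

  HTTP/1.1 200 OK
  Content-Type: text/html

  <html> ...the requested document follows here... </html>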

1.1 See the Protocol

Using curl's option --verbose (-v as a short option) will display what kind
of commands curl sends to the server, as well as a few other informational
texts.

--verbose is the single most useful option when it comes to debugging or
even understanding the curl<->server interaction.
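
For example (with a placeholder URL), this shows the request headers curl
sends and the response headers it receives, prefixed with '>' and '<'
respectively:

  curl --verbose http://www.example.com/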

Sometimes even --verbose is not enough. Then --trace and --trace-ascii offer
even more details as they show EVERYTHING curl sends and receives. Use them
like this:

  curl --trace-ascii debugdump.txt http://www.example.com/

2. URL

The Uniform Resource Locator format is how you specify the address of a
particular resource on the Internet. You know these, you've seen URLs like
http://curl.haxx.se or https://yourbank.com a million times.

3. GET a page

The simplest and most common request/operation made using HTTP is to get a
URL. The URL could itself refer to a web page, an image or a file. The client
issues a GET request to the server and receives the document it asked for.
If you issue the command line

  curl http://curl.haxx.se

you get a web page returned in your terminal window: the entire HTML document
that the URL holds.

All HTTP replies contain a set of response headers that are normally hidden;
use curl's --include (-i) option to display them as well as the rest of the
document. You can also ask the remote server for ONLY the headers by using
the --head (-I) option (which will make curl issue a HEAD request).
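
As a sketch of the difference, using the same URL as above:

  curl --include http://curl.haxx.se     # headers followed by the document
  curl --head http://curl.haxx.se        # headers only, via a HEAD request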

4. Forms

Forms are the general way a web site can present an HTML page with fields for
the user to enter data in, and then press some kind of 'OK' or 'submit'
button to get that data sent to the server. The server then typically uses
the posted data to decide how to act: for example, to use the entered words
to search in a database, to add the info to a bug tracking system, to display
the entered address on a map, or to use the info as login credentials
verifying that the user is allowed to see what it is about to see.

Of course there has to be some kind of program on the server end to receive
the data you send. You cannot just invent something out of thin air.

4.1 GET

A GET-form uses the method GET, as specified in HTML like:

  <form method="GET" action="junk.cgi">
    <input type=text name="birthyear">
    <input type=submit name=press value="OK">
  </form>

In your favorite browser, this form will appear with a text box to fill in
and a press-button labeled "OK". If you fill in '1905' and press the OK
button, your browser will then create a new URL to get for you. The URL will
get "junk.cgi?birthyear=1905&press=OK" appended to the path part of the
previous URL.

If the original form was seen on the page "www.hotmail.com/when/birth.html",
the second page you'll get will become
"www.hotmail.com/when/junk.cgi?birthyear=1905&press=OK".

Most search engines work this way.

To make curl do the GET form post for you, just enter the expected created
URL:

  curl "http://www.hotmail.com/when/junk.cgi?birthyear=1905&press=OK"

4.2 POST

The GET method makes all input field names get displayed in the URL field of
your browser. That's generally a good thing when you want to be able to
bookmark that page with your given data, but it is an obvious disadvantage
if you entered secret information in one of the fields or if there is a
large number of fields creating a very long and unreadable URL.

The HTTP protocol then offers the POST method. This way the client sends the
data separated from the URL and thus you won't see any of it in the URL
address field.

The form would look very similar to the previous one:

  <form method="POST" action="junk.cgi">
    <input type=text name="birthyear">
    <input type=submit name=press value=" OK ">
  </form>

And to use curl to post this form with the same data filled in as before, we
could do it like:

  curl --data "birthyear=1905&press=%20OK%20" \
    http://www.example.com/when.cgi

This kind of POST will use the Content-Type
application/x-www-form-urlencoded and is the most widely used POST kind.

The data you send to the server MUST already be properly encoded; curl will
not do that for you. For example, if you want the data to contain a space,
you need to replace that space with %20, etc. Failing to comply with this
will most likely cause your data to be received wrongly and messed up.

Recent curl versions can in fact URL-encode POST data for you, like this:

  curl --data-urlencode "name=I am Daniel" http://www.example.com

4.3 File Upload POST

Back in late 1995 an additional way to post data over HTTP was defined. It is
documented in RFC 1867, which is why this method is sometimes referred to as
RFC 1867 posting.

This method is mainly designed to better support file uploads. A form that
allows a user to upload a file could be written like this in HTML:

  <form method="POST" enctype='multipart/form-data' action="upload.cgi">
    <input type=file name=upload>
    <input type=submit name=press value="OK">
  </form>

This clearly shows that the Content-Type about to be sent is
multipart/form-data.

To post to a form like this with curl, you enter a command line like:

  curl --form upload=@localfilename --form press=OK [URL]
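
The @ prefix tells curl to read that field's contents from the named local
file. If the server is picky about the file's type, you can also state a
Content-Type for that part; a sketch, assuming the file is a PNG image:

  curl --form "upload=@localfilename;type=image/png" --form press=OK [URL]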

4.4 Hidden Fields

A very common way for HTML based applications to pass state information
between pages is to add hidden fields to the forms. Hidden fields are
already filled in, they aren't displayed to the user and they get passed
along just as all the other fields.

A similar example form with one visible field, one hidden field and one
submit button could look like:

  <form method="POST" action="foobar.cgi">
    <input type=text name="birthyear">
    <input type=hidden name="person" value="daniel">
    <input type=submit name="press" value="OK">
  </form>

To post this with curl, you won't have to think about whether the fields are
hidden or not. To curl they're all the same:

  curl --data "birthyear=1905&press=OK&person=daniel" [URL]

4.5 Figure Out What A POST Looks Like

When you're about to fill in a form and send it to a server by using curl
instead of a browser, you're of course very interested in sending a POST
exactly the way your browser does.

An easy way to get to see this is to save the HTML page with the form on
your local disk, modify the 'method' to a GET, and press the submit button
(you could also change the action URL if you want to).

You will then clearly see the data get appended to the URL, separated with a
'?' character, as GET forms are supposed to be.

5. PUT

Perhaps the best way to upload data to an HTTP server is to use PUT. Then
again, this of course requires that someone put a program or script on the
server end that knows how to receive an HTTP PUT stream.

Put a file to an HTTP server with curl:

  curl --upload-file uploadfile http://www.example.com/receive.cgi

6. HTTP Authentication

HTTP Authentication is the ability to tell the server your username and
password so that it can verify that you're allowed to do the request you're
doing. The Basic authentication used in HTTP (which is the type curl uses by
default) is *plain* *text* based, which means it sends username and password
only slightly obfuscated, but still fully readable by anyone that sniffs on
the network between you and the remote server.

To tell curl to use a user and password for authentication:

  curl --user name:password http://www.example.com

The site might require a different authentication method (check the headers
returned by the server), and then --ntlm, --digest, --negotiate or even
--anyauth might be options that suit you.
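
For example, to ask for Digest authentication instead of Basic (a sketch; the
credentials and URL are placeholders):

  curl --digest --user name:password http://www.example.com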

Sometimes your HTTP access is only available through the use of an HTTP
proxy. This seems to be especially common at various companies. An HTTP proxy
may require its own user and password to allow the client to get through to
the Internet. To specify those with curl, run something like:

  curl --proxy-user proxyuser:proxypassword curl.haxx.se

If your proxy requires the authentication to be done using the NTLM method,
use --proxy-ntlm; if it requires Digest, use --proxy-digest.

If you use any one of these user+password options but leave out the password
part, curl will prompt for the password interactively.
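
A sketch of that prompting behavior (placeholder user and URL):

  curl --user name http://www.example.com

Curl will ask for the password on the terminal before making the request.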

Do note that when a program is run, its parameters might be visible when
listing the running processes of the system. Thus, other users may be able to
watch your passwords if you pass them as plain command line options. There
are ways to circumvent this.

It is worth noting that while this is how HTTP Authentication works, very
many web sites will not use this concept when they provide logins etc. See
the Web Login chapter further below for more details on that.

7. Referer

An HTTP request may include a 'referer' field (yes, it is misspelled), which
can be used to tell from which URL the client got to this particular
resource. Some programs/scripts check the referer field of requests to verify
that it didn't arrive from an external site or an unknown page. While this is
a stupid way to check something so easily forged, many scripts still do
it. Using curl, you can put anything you want in the referer field and thus
more easily be able to fool the server into serving your request.

Use curl to set the referer field with:

  curl --referer http://www.example.com/from-page http://www.example.com

8. User Agent

Very similar to the referer field, all HTTP requests may set the User-Agent
field. It names what user agent (client) is being used. Many applications use
this information to decide how to display pages. Silly web programmers try to
make different pages for users of different browsers to make them look the
best possible for their particular browsers. They usually also do different
kinds of javascript, vbscript etc.

At times, you will see that getting a page with curl will not return the same
page that you see when getting the page with your browser. Then you know it
is time to set the User Agent field to fool the server into thinking you're
one of those browsers.

To make curl look like Internet Explorer 5 on a Windows 2000 box:

  curl --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" [URL]

Or why not look like you're using Netscape 4.73 on an old Linux box:

  curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]

9. Redirects

When a resource is requested from a server, the reply from the server may
include a hint about where the browser should go next to find this page, or a
new page holding newly generated output. The header that tells the browser
to redirect is Location:.

Curl does not follow Location: headers by default, but will simply display
such pages in the same manner it displays all HTTP replies. It does however
feature an option that will make it attempt to follow the Location: pointers.

To tell curl to follow a Location:

  curl --location http://www.example.com

If you use curl to POST to a site that immediately redirects you to another
page, you can safely use --location (-L) and --data/--form together. Curl will
only use POST in the first request, and then revert to GET in the following
operations.
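
A sketch of such a combination, reusing the earlier form example (the URL and
fields are placeholders):

  curl --location --data "birthyear=1905&press=OK" \
    http://www.example.com/when.cgi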

10. Cookies

The way the web browsers do "client side state control" is by using
cookies. Cookies are just names with associated contents. The cookies are
sent to the client by the server. The server tells the client for what path
and host name it wants the cookie sent back, and it also sends an expiration
date and a few more properties.

When a client communicates with a server with a name and path as previously
specified in a received cookie, the client sends back the cookies and their
contents to the server, unless of course they are expired.

Many applications and servers use this method to connect a series of requests
into a single logical session. To be able to use curl in such situations, we
must be able to record and send back cookies the way the web application
expects them, the same way browsers deal with them.

The simplest way to send a few cookies to the server when getting a page with
curl is to add them on the command line like:

  curl --cookie "name=Daniel" http://www.example.com

Cookies are sent as common HTTP headers. This is practical as it allows curl
to record cookies simply by recording headers. Record cookies with curl by
using the --dump-header (-D) option like:

  curl --dump-header headers_and_cookies http://www.example.com

(Take note that the --cookie-jar option described below is a better way to
store cookies.)

Curl has a full blown cookie parsing engine built in that comes to use if you
want to reconnect to a server and use cookies that were stored from a
previous connection (or handcrafted manually to fool the server into
believing you had a previous connection). To use previously stored cookies,
you run curl like:

  curl --cookie stored_cookies_in_file http://www.example.com

Curl's "cookie engine" gets enabled when you use the --cookie option. If you
only want curl to understand received cookies, use --cookie with a file that
doesn't exist. For example, if you want to let curl understand cookies from a
page and follow a location (and thus possibly send back cookies it received),
you can invoke it like:

  curl --cookie nada --location http://www.example.com

Curl has the ability to read and write cookie files that use the same file
format that Netscape and Mozilla do. It is a convenient way to share cookies
between browsers and automatic scripts. The --cookie (-b) switch
automatically detects if a given file is such a cookie file and parses it,
and by using the --cookie-jar (-c) option you'll make curl write a new cookie
file at the end of an operation:

  curl --cookie cookies.txt --cookie-jar newcookies.txt \
    http://www.example.com

11. HTTPS

There are a few ways to do secure HTTP transfers. By far the most common
protocol for doing this is what is generally known as HTTPS, HTTP over
SSL. SSL encrypts all the data that is sent and received over the network and
thus makes it harder for attackers to spy on sensitive information.

SSL (or TLS as the latest version of the standard is called) offers a
truckload of advanced features to allow all those encryption and key
infrastructure mechanisms encrypted HTTP requires.

Curl supports encrypted fetches thanks to the freely available OpenSSL
libraries. To get a page from an HTTPS server, simply run curl like:

  curl https://secure.example.com

11.1 Certificates

In the HTTPS world, you use certificates to validate that you are the one
you claim to be, as an addition to normal passwords. Curl supports client-
side certificates. All certificates are locked with a pass phrase, which you
need to enter before the certificate can be used by curl. The pass phrase
can be specified on the command line or, if not, entered interactively when
curl queries for it. Use a certificate with curl on an HTTPS server like:

  curl --cert mycert.pem https://secure.example.com

curl also tries to verify that the server is who it claims to be, by
verifying the server's certificate against a locally stored CA cert
bundle. Failing the verification will cause curl to deny the connection. You
must then use --insecure (-k) in case you want to tell curl to ignore that
the server can't be verified.
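
Two sketches of those situations (file names and URLs are placeholders):
pointing curl at a specific CA cert bundle, and telling curl to skip the
verification entirely, which should be a last resort:

  curl --cacert ca-bundle.crt https://secure.example.com
  curl --insecure https://secure.example.com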

More about server certificate verification and CA cert bundles can be read
in the SSLCERTS document, available online here:

  http://curl.haxx.se/docs/sslcerts.html

12. Custom Request Elements

Doing fancy stuff, you may need to add or change elements of a single curl
request.

For example, you can change the POST request to a PROPFIND and send the data
as "Content-Type: text/xml" (instead of the default Content-Type) like this:

  curl --data "<xml>" --header "Content-Type: text/xml" \
    --request PROPFIND url.com

You can delete a default header by providing one without content. For
example, you can ruin the request by chopping off the Host: header:

  curl --header "Host:" http://www.example.com

You can add headers the same way. Your server may want a "Destination:"
header, and you can add it:

  curl --header "Destination: http://nowhere" http://example.com

13. Web Login

While not strictly just HTTP related, it still causes a lot of people
problems, so here's the executive run-down of how the vast majority of all
login forms work and how to log in to them using curl.

It can also be noted that to do this properly in an automated fashion, you
will most certainly need to script things and do multiple curl invocations
etc.

First, servers mostly use cookies to track the logged-in status of the
client, so you will need to capture the cookies you receive in the
responses. Then, many sites also set a special cookie on the login page (to
make sure you got there through their login page) so you should make a habit
of first getting the login-form page to capture the cookies set there.
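
A sketch of that two-step habit, with made-up login page, form target and
field names; the real values have to be taken from the site's own form:

  # fetch the login page first, saving any cookies it sets
  curl --cookie-jar cookies.txt http://www.example.com/login-form.html

  # then post the credentials, sending those cookies back and saving new ones
  curl --cookie cookies.txt --cookie-jar cookies.txt \
    --data "user=daniel&password=secret" \
    http://www.example.com/login.cgi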

Some web-based login systems feature various amounts of javascript, and
sometimes they use such code to set or modify cookie contents. Possibly they
do that to prevent programmed logins, like this manual describes how to do...
Anyway, if reading the code isn't enough to let you repeat the behavior
manually, capturing the HTTP requests done by your browser and analyzing the
sent cookies is usually a working method to figure out how to shortcut the
javascript need.

In the actual <form> tag for the login, lots of sites fill in random, session
or otherwise secretly generated hidden fields, and you may need to first
capture the HTML code for the login form and extract all the hidden fields to
be able to do a proper login POST. Remember that the contents need to be URL
encoded when sent in a normal POST.

14. Debug

Many times when you run curl on a site, you'll notice that the site doesn't
seem to respond the same way to your curl requests as it does to your
browser's.

Then you need to start making your curl requests more similar to your
browser's requests (a combined example follows the list below):

* Use the --trace-ascii option to store fully detailed logs of the requests
  for easier analyzing and better understanding

* Make sure you check for and use cookies when needed (both reading with
  --cookie and writing with --cookie-jar)

* Set the user-agent field to one like a recent popular browser does

* Set the referer field like it is set by the browser

* If you use POST, make sure you send all the fields and in the same order as
  the browser does it. (See chapter 4.5 above)
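
Here's a sketch that combines several of those tips in one invocation; the
user-agent string is borrowed from chapter 8 and the URL, referer and form
fields are placeholders:

  curl --trace-ascii debugdump.txt \
    --cookie cookies.txt --cookie-jar cookies.txt \
    --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" \
    --referer http://www.example.com/previous-page \
    --data "birthyear=1905&press=OK" \
    http://www.example.com/target-page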

A very good helper to make sure you do this right is the LiveHTTPHeader tool
that lets you view all headers you send and receive with Mozilla/Firefox
(even when using HTTPS).

A more raw approach is to capture the HTTP traffic on the network with tools
such as ethereal or tcpdump and check what headers were sent and received by
the browser. (HTTPS makes this technique inefficient.)

15. References

RFC 2616 is a must-read if you want an in-depth understanding of the HTTP
protocol.

RFC 3986 explains the URL syntax.

RFC 2109 defines how cookies are supposed to work.

RFC 1867 defines the HTTP post upload format.

http://curl.haxx.se is the home of the cURL project