Name
curl — stdin stdout - file -- opt --help --version
Synopsis
curl [options] [URLs]
The curl command downloads data from a URL
to a file or to standard output. It’s handy for
capturing web pages or downloading files. For example, let’s capture
the Yahoo home page:
➜ curl http://www.yahoo.com > mypage.html
The page is saved to the file mypage.html in the current directory. If you provide multiple URLs, the output for all of them is appended to mypage.html.
Perhaps the most useful feature of curl is its ability to download files
without needing a web browser:
➜ curl -O http://www.example.com/files/manual.pdf
You can write shell scripts to download sets of files if you know their names. (See Programming with Shell Scripts for details.) This line downloads files 1.mpeg through 3.mpeg from example.com:
➜ for i in 1 2 3; do \
    curl -o $i.mpeg http://example.com/$i.mpeg; done
curl can resume a large
download if it gets interrupted in the middle, say, due to a network
failure: just rerun curl with -C - (resume) and the
same target URL in the following way:
➜ curl -o myfile http://example.com/some_big_file
Suppose the transfer gets interrupted partway through. Now run:
➜ curl -C - -o myfile http://example.com/some_big_file
With -C -, curl checks the size of the partial myfile on disk and resumes the
download from that offset. curl has over 100
options, so we’ll cover just a few important ones.
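The resume logic above can be sketched without touching the network. This is only an illustration of the idea behind -C -, not curl itself; the file name and size are made up, and echo stands in for the request curl would send:

```shell
#!/bin/sh
# Network-free sketch of what "curl -C -" does: it measures the partial
# file given to -o and asks the server for only the remaining bytes.
head -c 1024 /dev/zero > myfile        # stand-in for an interrupted download

offset=$(wc -c < myfile | tr -d ' ')   # curl -C - reads this size...
echo "Range: bytes=${offset}-"         # ...and sends this HTTP request header

rm -f myfile
```

Because the offset is taken from the output file, -C - needs the same -o target as the original, interrupted command.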
Useful options
-o file
    Write the retrieved data to file. Otherwise it’s written to standard output.
-O
    Write the retrieved data to a file with the same name as the remote file.
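The difference between -o and -O can be sketched without fetching anything: -O names the local file after the last path segment of the URL. The URL below is hypothetical, and echo stands in for curl:

```shell
#!/bin/sh
# Hypothetical URL; echo prints the commands rather than running curl.
url="http://www.example.com/files/manual.pdf"

# -O derives the local file name from the last path segment of the URL:
file=$(basename "$url")
echo "curl -O $url            # saves as $file"
echo "curl -o mine.pdf $url   # saves under a name you choose"
```

Use -O when the remote name is fine as-is, and -o when you want to pick the name (or a different directory) yourself.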