Chapter 12. Spiders

So far we have focused on the mechanics of getting and parsing data off the Web, just a page here and a page there, without much attention to the ramifications. In this section, we consider issues that arise from writing programs that send more than a few requests to a given web site. Then we move on to writing recursive web user agents, or spiders. With these skills, you’ll be able to write programs that automatically navigate web sites, from simple link checkers to powerful bulk-download tools.

Types of Web-Querying Programs

Let’s say your boss comes to you and says, “I need you to write a spider.” What does he mean by “spider”? Is he talking about the simple one-page screen scrapers we wrote in earlier chapters? Or does he want to extract many pages from a single server? Or maybe he wants you to write a new Google, one that attempts to find and download every page on the Web. Roughly speaking, there are four kinds of programs that make requests to web servers:

Type One Requester

This program requests a couple of items from a server, knowing ahead of time the URL of each. An example of this is our program in Chapter 7 that requested just the front page of the BBC News web site.
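In LWP terms, such a program can be only a few lines long. Here is a minimal sketch (the BBC URL is merely illustrative; any handful of known URLs would do):

    #!/usr/bin/perl
    # Type One requester: fetch a few pages whose URLs we know in advance.
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $browser = LWP::UserAgent->new;
    foreach my $url ('http://news.bbc.co.uk/') {   # known URLs go here
        my $response = $browser->get($url);
        die "Couldn't get $url: ", $response->status_line, "\n"
          unless $response->is_success;
        print "Got ", length($response->content), " bytes from $url\n";
    }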

Type Two Requester

This program requests a few items from a server, then requests the pages to which those items link (or possibly just a subset of them). An example of this is the program we alluded to in Chapter 11 that downloaded the front page of the New York Times web site, then downloaded ...
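A sketch of such a program: fetch one known page, harvest its links with HTML::LinkExtor, then request just a subset of them. The starting URL and the ten-link limit here are illustrative assumptions, nothing more:

    #!/usr/bin/perl
    # Type Two requester: get one page, then (some of) the pages it links to.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::LinkExtor;
    use URI;

    my $browser = LWP::UserAgent->new;
    my $start   = 'http://www.nytimes.com/';       # illustrative start page

    my $response = $browser->get($start);
    die "Couldn't get $start: ", $response->status_line, "\n"
      unless $response->is_success;

    # Collect absolute URLs from every <a href="..."> tag on the page.
    my (@links, %seen);
    my $extor = HTML::LinkExtor->new(sub {
        my ($tag, %attr) = @_;
        return unless $tag eq 'a' and $attr{href};
        my $abs = URI->new_abs($attr{href}, $response->base);
        push @links, $abs unless $seen{$abs}++;
    });
    $extor->parse($response->content);

    # Request just a subset of the linked pages -- here, the first ten.
    my $count = 0;
    foreach my $link (@links) {
        last if ++$count > 10;
        my $page = $browser->get($link);
        print $page->is_success ? "ok     " : "FAILED ", $link, "\n";
    }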
