The preceding chapters have been about getting things from the Web. But once you get a file, you have to process it. If you get a GIF, you’ll use some module or external program that reads GIFs and likewise if you get a PNG, an RSS file, an MP3, or whatever. However, most of the interesting processable information on the Web is in HTML, so much of the rest of this book will focus on getting information out of HTML specifically.
In this chapter, we will use a rudimentary approach to processing HTML source: Perl regular expressions. This technique is powerful and most web sites can be mined in this fashion. We present the techniques of using regular expressions to extract data and show you how to debug those regular expressions. Examples from Amazon, the O’Reilly Network, Netscape bookmark files, and the Weather Underground web site demonstrate the techniques.
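Before turning to a live site, here is the technique in miniature: match a regular expression against a string of HTML and capture the interesting part. The HTML fragment below is invented for illustration, not taken from any real page.

    #!/usr/bin/perl -w
    # Minimal sketch of regex-based extraction; the HTML here is made up.
    use strict;

    my $html = '<b>Temperature:</b> 72&deg;F<br>';
    if ($html =~ m{<b>Temperature:</b>\s*(\d+)&deg;F}) {
      print "It's $1 degrees.\n";    # prints "It's 72 degrees."
    }

The parentheses capture the digits into $1; everything else in the pattern just anchors the match to the surrounding markup.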
Suppose we want to extract information from an Amazon book page. The first problem is getting the HTML. Browsing Amazon shows that the URL for a book page is http://www.amazon.com/exec/obidos/ASIN/isbn, where isbn is the book's unique International Standard Book Number. So to fetch the Perl Cookbook's page, for example:
    #!/usr/bin/perl -w
    use strict;
    use LWP::Simple;

    my $html = get("http://www.amazon.com/exec/obidos/ASIN/1565922433")
      or die "Couldn't fetch the Perl Cookbook's page.";
The relevant piece of HTML looks like this:
<br clear="left"> ...
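Given HTML like that in $html, a capturing regular expression pulls out the datum we care about. The fragment below is a hypothetical stand-in for the real Amazon markup, whose exact form may differ; the pattern would need adjusting to match the actual page source.

    #!/usr/bin/perl -w
    # Hypothetical sketch: this fragment stands in for the real page HTML.
    use strict;

    my $html = '<br clear="left"><b>Amazon.com Sales Rank:</b> 4,070</font><br>';
    if ($html =~ m{Sales Rank:</b>\s*([\d,]+)}) {
      print "Rank: $1\n";    # prints "Rank: 4,070"
    } else {
      print "Couldn't find the sales rank.\n";
    }

Note the character class [\d,], which allows for the comma Amazon uses as a thousands separator.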