To scrape a website, we first need to download the web pages containing the data of interest—a process known as crawling. There are a number of approaches to crawling a website, and the appropriate choice depends on the structure of the target site. This chapter will explore how to download web pages safely, and then introduce the following three common approaches to crawling a website:
To crawl web pages, we first need to download them. Here is a simple script that uses Python's urllib2 module to download a URL:
    import urllib2

    def download(url):
        return urllib2.urlopen(url).read()
    ...
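Note that urllib2 exists only in Python 2; in Python 3 its functionality was folded into the urllib.request and urllib.error modules. A minimal Python 3 sketch of the same download function, assuming no special error handling is needed yet, might look like this:

    # Python 3 equivalent: urllib2 was merged into urllib.request
    import urllib.request

    def download(url):
        # urlopen returns a response object; read() returns the body as bytes
        return urllib.request.urlopen(url).read()

The function returns raw bytes rather than a decoded string, so callers that need text must decode the result themselves (for example with the charset declared in the response headers).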