The previous chapter presented some techniques and patterns for building large, scalable, and (most importantly!) maintainable web crawlers. Although building these by hand is easy enough, many libraries, frameworks, and even GUI-based tools will do the work for you, or at least try to make your life a little easier.
This chapter introduces one of the best frameworks for developing crawlers: Scrapy. During the writing of the first edition of Web Scraping with Python, Scrapy had not yet been released for Python 3.x, and its inclusion in the text was limited to a single section. Since then, the library has been updated to support Python 3.3+, additional features have been added, and I’m excited to expand this section into its own chapter.
One of the challenges of writing web crawlers is that you’re often performing the same tasks again and again: find all links on a page, evaluate the difference between internal and external links, go to new pages. These basic patterns are useful to know and to be able to write from scratch, but the Scrapy library handles many of these details for you.
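As a reminder of what that hand-rolled pattern looks like, here is a minimal sketch of one of those recurring tasks: collecting the links on a page and separating internal from external ones. It uses only the standard library, and the function and class names are illustrative, not anything Scrapy provides.

```python
from urllib.parse import urlparse, urljoin
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href value of every anchor tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def split_links(html, base_url):
    """Return (internal, external) absolute URLs found in html.

    A link counts as internal when, after resolving it against
    base_url, it shares base_url's network location.
    """
    parser = LinkCollector()
    parser.feed(html)
    base_netloc = urlparse(base_url).netloc
    internal, external = [], []
    for link in parser.links:
        absolute = urljoin(base_url, link)  # resolve relative links
        if urlparse(absolute).netloc == base_netloc:
            internal.append(absolute)
        else:
            external.append(absolute)
    return internal, external
```

Writing (and rewriting) helpers like this for every project is exactly the busywork Scrapy is designed to take off your hands.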
Of course, Scrapy isn’t a mind reader. You still need to define page templates, give it locations to start scraping from, and define URL patterns for the pages that you’re looking for. But in these cases, it provides a clean framework to keep your code organized.
The Scrapy website offers the tool for download, as well as instructions for installing Scrapy with third-party ...