Chapter 6. Heavyweight Scraping with Scrapy

As your scraping goals get more ambitious, hacking solutions with BeautifulSoup and requests can get very messy very fast. Managing the scraped data as requests spawn more requests gets tricky, and if your requests are being made synchronously, things start to slow down rapidly. A whole load of problems you probably hadn’t anticipated start to make themselves known. It’s at this point that you want to turn to a powerful, robust library that solves all these problems and more. And that’s where Scrapy comes in.

Where BeautifulSoup is a very handy little penknife for quick-and-dirty scraping, Scrapy is a Python library that can do large-scale data scrapes with ease. It has all the things you’d expect, like built-in caching (with expiration times), asynchronous requests via Twisted (the event-driven networking engine Scrapy is built on), User-Agent randomization, and a whole lot more. The price for all this power is a fairly steep learning curve, which this chapter aims to smooth with a simple example. I think Scrapy is a powerful addition to any dataviz toolkit and really opens up possibilities for web data collection, but if you don’t have any need for heavyweight scraping fu right now, it’s fine to assume we’ve collected our Nobel Prize data and proceed to Part III. Otherwise, let’s buckle our seat belts and see what a real scraping engine can do.
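To make those features a little more concrete, here is a minimal sketch of the kind of settings.py a Scrapy project might use to switch on HTTP caching with an expiration time and tune its asynchronous request behavior. The project name, numeric values, and User-Agent string are illustrative assumptions, not recommendations.

```python
# settings.py -- a minimal, illustrative sketch; all values are assumptions.

BOT_NAME = "nobel_winners"  # hypothetical project name

# Built-in HTTP caching, with cached pages expiring after a day.
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 60 * 60 * 24

# Scrapy issues requests asynchronously on top of Twisted; these settings
# control how many requests are in flight and how politely they are spaced.
CONCURRENT_REQUESTS = 16
DOWNLOAD_DELAY = 0.5

# A fixed User-Agent string; randomizing it per request is usually handled
# by a downloader middleware rather than a single setting.
USER_AGENT = "nobel_winners (+https://example.com)"
```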

In “Scraping Data”, we managed to scrape a dataset containing all the Nobel Prize winners by name, year, and category. ...
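As a preview of what that scrape looks like in Scrapy terms, here is a minimal spider sketch. The target URL, CSS selectors, and column layout are placeholder assumptions to show the shape of a spider, not working selectors for any particular page.

```python
import scrapy


class NobelWinnersSpider(scrapy.Spider):
    """Illustrative sketch of a spider yielding name, year, and category.

    The start URL and selectors below are assumptions for demonstration only.
    """

    name = "nobel_winners_sketch"
    start_urls = [
        "https://en.wikipedia.org/wiki/List_of_Nobel_laureates"  # assumed target
    ]

    def parse(self, response):
        # Hypothetical selector: one table row per laureate.
        for row in response.css("table.wikitable tr"):
            cells = row.css("td::text").getall()
            if len(cells) >= 3:
                yield {
                    "year": cells[0].strip(),
                    "category": cells[1].strip(),
                    "name": cells[2].strip(),
                }
```

Saved as, say, nobel_winners_sketch.py, this could be run standalone with `scrapy runspider nobel_winners_sketch.py -o winners.json`, which writes whatever items the selectors happen to match out as JSON.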
