Common Crawl

Common Crawl (http://commoncrawl.org/) is a repository of data crawled from the Internet over the last seven years. It is extremely large and, what is more, it is available for everyone to download and analyze.

Of course, we will not be able to use all of it: even a small fraction is so large that processing it requires a big and powerful cluster. In this chapter, we will take a few archives from the end of 2016 and extract the text, representing it using TF-IDF.
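As a preview of that representation, here is a minimal TF-IDF sketch in plain Java, assuming raw term counts and the standard logarithmic inverse document frequency; the class and method names are hypothetical, not taken from the chapter's code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TfIdf {

    /**
     * Computes plain TF-IDF weights for the tokens of one document.
     * df maps each term to the number of documents that contain it;
     * numDocs is the total number of documents in the collection.
     */
    static Map<String, Double> tfIdf(List<String> tokens,
                                     Map<String, Integer> df, int numDocs) {
        Map<String, Double> weights = new HashMap<>();
        for (String token : tokens) {
            weights.merge(token, 1.0, Double::sum); // raw term frequency
        }
        // Multiply each term frequency by log(N / (1 + df)); the +1 guards
        // against division by zero for terms missing from the df map.
        weights.replaceAll((token, tf) ->
                tf * Math.log((double) numDocs / (1 + df.getOrDefault(token, 0))));
        return weights;
    }
}
```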

Downloading the data is not complex, and you can find the instructions at http://commoncrawl.org/the-data/get-started/. The data is available in Amazon S3 storage, so AWS users can access it easily. In this chapter, however, we will download a part of Common Crawl via HTTP ...
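As a minimal sketch of the HTTP route, the snippet below fetches the gzipped list of WET (extracted-text) archives published for one crawl and prints the first few entries. The crawl identifier CC-MAIN-2016-50 and the wet.paths.gz listing are assumptions based on Common Crawl's usual layout, not paths taken from this chapter.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.zip.GZIPInputStream;

public class CommonCrawlPaths {

    public static void main(String[] args) throws Exception {
        // Each crawl publishes a gzipped list of its archive files.
        // CC-MAIN-2016-50 (December 2016) is assumed here as an example;
        // substitute the crawl you actually want to process.
        String url = "https://commoncrawl.s3.amazonaws.com/"
                + "crawl-data/CC-MAIN-2016-50/wet.paths.gz";

        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new URL(url).openStream())))) {
            // Print a few archive paths; appending any of them to the same
            // base URL gives a file that can be downloaded over plain HTTP.
            reader.lines().limit(5).forEach(System.out::println);
        }
    }
}
```

Each printed path points to one gzipped WET archive, so the same pattern (open a URL, wrap the stream in GZIPInputStream) also serves to download and read the archives themselves.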
