Running website scraping locally is fine for one-off tasks and small amounts of data, where you can easily trigger the crawl manually.
However, if you want recurring tasks and automatic scheduling, you should think about other solutions, such as deploying your spiders to the cloud or to a rented server.
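To give you a taste of what such a setup can look like, here is a minimal sketch that triggers a crawl on a Scrapyd instance through its schedule.json endpoint. The host localhost:6800, the project name my_project, and the spider name my_spider are placeholders for whatever your own deployment uses; the only assumption is that you have Scrapyd running with a project already deployed to it.

# Minimal sketch: trigger a crawl on a running Scrapyd instance via its
# schedule.json endpoint. Assumes Scrapyd listens on localhost:6800 and a
# project "my_project" with a spider "my_spider" is already deployed there
# -- adjust these names to your own setup.
import requests

SCRAPYD_URL = "http://localhost:6800/schedule.json"

response = requests.post(
    SCRAPYD_URL,
    data={
        "project": "my_project",  # deployed Scrapyd project (placeholder)
        "spider": "my_spider",    # spider inside that project (placeholder)
    },
)
response.raise_for_status()
print(response.json())  # e.g. {"status": "ok", "jobid": "..."}

Call a script like this from a scheduler such as cron on your server, and you get the kind of recurring, unattended crawls that a local, hand-triggered setup cannot give you.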
In this chapter we will look at the cloud, that virtual network of servers, and what options you have if you want to run website scraping in the cloud. I’ll focus on Scrapy because it is the tool for website scraping, and there are services provided and ...