As a practical example, we'll build a simple web crawler that scans the website you point it at, follows every link that leads to another page on the same site, and downloads all the images it finds.
To do this, we'll need two functions: one that scans a page for links to other pages on the same site, building up a set of pages to visit, and one that scans a page for images and downloads them.
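Here is a minimal sketch of what those two functions might look like, assuming we work in Python with the requests and beautifulsoup4 packages; the names get_links and download_images, and the images directory, are placeholders rather than anything fixed above.

```python
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def get_links(page_url):
    """Return links on page_url that point to other pages of the same site."""
    html = requests.get(page_url).text
    soup = BeautifulSoup(html, "html.parser")
    site = urlparse(page_url).netloc
    links = set()
    for a in soup.find_all("a", href=True):
        url = urljoin(page_url, a["href"])
        if urlparse(url).netloc == site:  # stay on the same website
            links.add(url)
    return links


def download_images(page_url, target_dir="images"):
    """Download every image found on page_url into target_dir."""
    html = requests.get(page_url).text
    soup = BeautifulSoup(html, "html.parser")
    os.makedirs(target_dir, exist_ok=True)
    for img in soup.find_all("img", src=True):
        img_url = urljoin(page_url, img["src"])
        name = os.path.basename(urlparse(img_url).path) or "image"
        with open(os.path.join(target_dir, name), "wb") as f:
            f.write(requests.get(img_url).content)
```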
To speed things up, we'll download images in two threads. The threads must not interfere with each other: a thread should not scan a page that another thread has already scanned, and it should not download an image that has already been downloaded, as sketched below.
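A minimal sketch of how the two threads could coordinate, assuming Python's threading module and reusing the helpers from the previous sketch; crawl_worker, pages_to_visit, and scanned_pages are illustrative names, not taken from the text.

```python
import threading

pages_to_visit = {"https://example.com/"}  # hypothetical start page
scanned_pages = set()                      # pages some thread has already scanned
lock = threading.Lock()                    # guards both sets


def crawl_worker():
    while True:
        with lock:
            if not pages_to_visit:
                return                      # simplification: a real crawler would also wait for in-flight work
            page = pages_to_visit.pop()
            if page in scanned_pages:
                continue                    # another thread got here first
            scanned_pages.add(page)
        # network I/O happens outside the lock so the threads don't block each other
        new_links = get_links(page)         # helper sketched above
        with lock:
            pages_to_visit |= new_links - scanned_pages
        download_images(page)               # helper sketched above


threads = [threading.Thread(target=crawl_worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same check-under-the-lock pattern would apply to a set of already downloaded image URLs, so that neither thread repeats a download the other has finished or started.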
So, a set of downloaded images and scanned web pages will be a shared resource for ...