Now that we've seen the scaffolding, let's dive into the actual logic (if it looks intimidating, don't worry; we'll go through it together). Within the script, this logic sits after the imports and before the argument parsing (that is, before the if __name__ clause):
def scrape(url, format_, type_):
    try:
        page = requests.get(url)
    except requests.RequestException as err:
        print(str(err))
    else:
        soup = BeautifulSoup(page.content, 'html.parser')
        images = _fetch_images(soup, url)
        images = _filter_images(images, type_)
        _save(images, format_)
Let's start with the scrape function. The first thing it does is fetch the page at the given url argument. Any error that occurs while doing this is caught as a requests.RequestException (err) and printed. ...
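If the try/except/else flow is unfamiliar, here is a minimal, self-contained sketch of the same pattern in isolation (the fetch name and both URLs are placeholders for illustration, not part of our script):

import requests

def fetch(url):
    try:
        page = requests.get(url, timeout=5)
    except requests.RequestException as err:
        # Runs only if the request raised (connection error, timeout, and so on).
        print('Request failed:', err)
    else:
        # Runs only if the try block completed without raising.
        print('Fetched', len(page.content), 'bytes from', url)

fetch('https://nonexistent.invalid')  # hypothetical bad host: takes the except branch
fetch('https://example.com')          # reachable host: takes the else branch

Putting the happy-path code in the else block, rather than inside try, keeps the except clause scoped to the single call that can actually raise a RequestException; an unexpected error in the parsing or saving steps won't be silently swallowed by it.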