This is probably the most important chapter for anyone starting with Scrapy. You just learned the basic methodology of developing spiders: UR2IM. You learned how to define custom Items that fit your needs; how to use ItemLoaders, XPath expressions, and processors to populate Items; and how to yield Requests. We used Requests to navigate horizontally across multiple index pages and vertically towards listing pages to extract Items. Finally, we saw how Rules can be used to create very powerful spiders with even fewer lines of code. Please feel free to read this chapter as many times as you need to deepen your understanding of these concepts, and of course, use it as a reference as you develop your own spiders.
We just got some information ...