December 2018
Beginner to intermediate
500 pages
12h 10m
English
Our web crawler is quite performant; using CSS selectors is very efficient. As it stands, though, if the same Wikipedia article comes up in different game sessions, we have to fetch it, parse it, and extract its contents all over again. This is a time-consuming and resource-intensive operation, and, more importantly, one we can easily eliminate if we simply store the article information the first time we fetch it.
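The idea of fetching an article only once can be sketched as a simple in-memory cache keyed by URL. The names here (`ARTICLE_CACHE`, `fetch_article`, `get_article`) are hypothetical, and the fetch itself is stubbed out; the point is only the caching pattern built on `Base.get!`:

```julia
# A minimal in-memory cache sketch; names are illustrative, not from the book.
const ARTICLE_CACHE = Dict{String,String}()

# Stand-in for the expensive fetch + parse + extract step.
function fetch_article(url::String)
    return "content of $url"
end

# Return the cached content if present; otherwise fetch, store, and return it.
function get_article(url::String)
    get!(ARTICLE_CACHE, url) do
        fetch_article(url)
    end
end
```

The first call to `get_article` for a given URL performs the fetch; every later call for the same URL is a dictionary lookup. The obvious limitation is that the cache dies with the process, which is exactly the gap a persistence layer fills.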
We could use Julia's serialization features, which we've already seen, but since we're building a fairly complex game, we would benefit from adding a database backend. Besides storing articles' data, we could also persist information about players, scores, preferences, and whatnot. ...
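As a baseline before a database backend, the serialization route mentioned above could look like this. The `Article` struct and helper names are hypothetical stand-ins for whatever article data the game actually keeps; only `serialize`/`deserialize` from the `Serialization` standard library are real:

```julia
using Serialization

# Hypothetical article record; the real game would store richer data.
struct Article
    url::String
    content::String
    links::Vector{String}
end

# Persist an article to disk so later sessions can skip re-fetching it.
save_article(path::AbstractString, article::Article) = serialize(path, article)

# Load a previously saved article back into memory.
load_article(path::AbstractString) = deserialize(path)::Article
```

This works, but deserializing requires the same struct definition to be loaded, and querying across many saved files is awkward, which is part of why a proper database backend is the more robust choice here.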