While the initial LDA implementations could be slow, which limited their use to small document collections, modern algorithms scale well to very large collections. Following the gensim documentation, we are going to build a topic model for the whole of the English-language Wikipedia. This takes hours, but can be done with just a laptop! With a cluster of machines, we could make it go much faster, but we will look at that sort of processing environment in a later chapter.
First, we download the whole Wikipedia dump from http://dumps.wikimedia.org. This is a large file (currently over 14 GB), so downloading it may take a while unless your internet connection is very fast. Then, we index it with a gensim tool: ...
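As a rough illustration of what this indexing step does, the same preprocessing can also be driven directly from Python using gensim's WikiCorpus class. The sketch below is not the exact tool invocation used here; the output file names are our own choice, and the input file name assumes the standard enwiki-latest-pages-articles.xml.bz2 dump:

    from gensim.corpora import WikiCorpus, MmCorpus

    # Parse the compressed dump; each article becomes a bag-of-words document.
    # On a laptop, this step alone can take several hours.
    # (Input file name assumes the standard Wikipedia dump download.)
    wiki = WikiCorpus('enwiki-latest-pages-articles.xml.bz2')

    # Save the vocabulary (word <-> id mapping) so it can be reused later.
    # Output file names here are illustrative choices.
    wiki.dictionary.save_as_text('wiki_en_wordids.txt')

    # Stream the corpus to disk in Matrix Market format for fast re-reading.
    MmCorpus.serialize('wiki_en_bow.mm', wiki)

The important point is that the dump is converted once into an on-disk, sparse bag-of-words representation plus a dictionary, so that later passes (such as training the LDA model itself) never need to re-parse the raw XML.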