Chapter 12. Conclusion

This brings us to the end of our journey together. Over the course of 11 chapters, we explored the origins of natural language processing and retraced how the field has advanced over the past decade. We delved into the nitty-gritty details of the space, including preprocessing, tokenization, and several types of word embeddings, such as Word2Vec, GloVe, and fastText.
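To make the embedding idea concrete, here is a minimal sketch of training Word2Vec-style vectors with gensim. The toy corpus and hyperparameters are our own illustration, not a recipe from the preceding chapters.

```python
# A minimal sketch of training word embeddings with gensim's Word2Vec.
# The corpus and hyperparameters below are illustrative only.
from gensim.models import Word2Vec

corpus = [
    ["natural", "language", "processing", "is", "fun"],
    ["word", "embeddings", "capture", "meaning"],
    ["transformers", "changed", "natural", "language", "processing"],
]

model = Word2Vec(
    sentences=corpus,  # pre-tokenized sentences
    vector_size=50,    # dimensionality of the embedding vectors
    window=2,          # context window size
    min_count=1,       # keep every token in this tiny corpus
    sg=1,              # 1 = skip-gram, 0 = CBOW
)

# Each word now maps to a dense vector.
vector = model.wv["language"]
print(vector.shape)  # (50,)
```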

We covered everything from vanilla recurrent nets to gated variants such as LSTMs and GRUs, and we explained how attention mechanisms, contextualized word embeddings, and Transformers helped shatter previous performance records. Most importantly, we used large, pretrained language models to perform transfer learning and fine-tuning, and we discussed how to productionize these models using various tools of the trade.
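As a reminder of how little code transfer learning now takes, the following is a minimal fine-tuning sketch using the Hugging Face Transformers and Datasets libraries. The checkpoint, dataset, and hyperparameters are illustrative assumptions, not the book's specific recipe.

```python
# A minimal fine-tuning sketch with Hugging Face Transformers.
# Model name, dataset, and hyperparameters are illustrative assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# IMDB is used here only as an example sentiment dataset.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    # A small subsample keeps this sketch quick to run.
    train_dataset=dataset["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```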

Instead of getting bogged down in theory, we focused mostly on applying state-of-the-art NLP techniques to solve real-world problems. We hope this helped you build greater intuition about NLP, how it works, and how to apply it well.

By now it should be clear that getting up and running with NLP is relatively easy, thanks in part to the open sourcing of large, pretrained language models by research teams at Google, Facebook, OpenAI, and others. Companies and projects such as spaCy, Hugging Face, AllenNLP, Amazon, Microsoft, and Google have introduced great tooling for NLP, too, making it less painful to develop NLP models of your own from scratch or to fine-tune existing models.
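For instance, a pretrained spaCy pipeline can be loaded and applied in just a few lines. The model name and example sentence below are our own illustration.

```python
# A minimal sketch of applying a pretrained spaCy pipeline.
import spacy

# Small English pipeline; install it first with:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Hugging Face and spaCy make NLP accessible.")

# Named entities recognized by the pretrained model.
print([(ent.text, ent.label_) for ent in doc.ents])
```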

Ten Final Lessons ...
