Chapter 7. Transformers
In the previous chapter, we covered recurrent neural networks (RNNs), the architecture in vogue in NLP until the transformer gained prominence.
Transformers are the workhorse of modern NLP. The original architecture, first proposed in the 2017 paper “Attention Is All You Need,” has taken the (deep learning) world by storm. Since then, NLP literature has been inundated with all sorts of new architectures that are broadly classified into either Sesame Street characters or words that end with “-former.”1
In this chapter, we’ll look at that very architecture, the transformer, in detail. We’ll analyze its core innovations and explore a hot new category of neural network layers: the attention mechanism.
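As a preview of where that exploration is headed, the central operation of the architecture is scaled dot-product attention, defined in the original paper as

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V

where Q, K, and V are matrices of query, key, and value vectors and d_k is the dimensionality of the keys. We’ll unpack each piece of this formula as the chapter progresses.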
Building a Transformer from Scratch
In Chapters 2 and 3, we explored how to use transformers in practice and how to leverage pretrained transformers to solve complex NLP problems. Now we’re going to take a deep dive into the architecture itself and learn how transformers work from first principles.
What does “first principles” mean? Well, for starters, it means we’re not allowed to use the Hugging Face Transformers library. We’ve raved about it plenty in this book already, so it’s about time we take a break from that and see how things actually work under the hood. For this chapter, we’re going to be using raw PyTorch instead.
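To give a taste of what “raw PyTorch” looks like here, below is a minimal sketch of the scaled dot-product attention operation previewed above. The function name and tensor shapes are our own illustration rather than anything pulled from a library; treat it as a sketch of the idea, not the full implementation we’ll build up in this chapter.

import math

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value):
    # query, key, value: tensors of shape (batch, seq_len, d_k).
    d_k = query.size(-1)
    # Similarity scores between every query and every key, scaled by
    # sqrt(d_k) to keep the softmax from saturating at large dimensions.
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_k)
    # Normalize the scores into attention weights over the keys.
    weights = F.softmax(scores, dim=-1)
    # Each output position is a weighted average of the value vectors.
    return weights @ value

# Self-attention on a random batch: Q, K, and V all come from the same input.
x = torch.randn(2, 5, 64)  # batch of 2, sequence length 5, d_k of 64
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # torch.Size([2, 5, 64])

A dozen lines suffice because PyTorch’s tensor operations handle the batching for us; the full architecture wraps this core in multi-head projections, masking, and feed-forward blocks.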
Note
When deploying models in production, especially on edge devices, you may have to go to an even lower level of abstraction. The tooling around edge device inference, as we mentioned ...