Chapter 15. Transformers and transformers
With the 2017 paper “Attention Is All You Need” by Ashish Vaswani et al., the field of AI was changed forever. While the paper’s abstract suggests something lightweight and simple, an evolution of the convolutional and recurrent architectures covered in Chapters 4 through 9 of this book, the impact of the work was, if you’ll forgive the pun, transformative. It utterly revolutionized AI, beginning with NLP. And despite the authors’ claim of simplicity, the approach was, and still is, complex to implement in code. At its core is a new approach to ML architecture: Transformers (which we capitalize to indicate that we’re referring to them as a concept).
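To make the core idea a little more concrete, here is a minimal sketch of scaled dot-product attention, the building block the paper is named for. This is an illustrative NumPy implementation for a single attention head, not the full multi-head machinery of the paper; the function name and shapes are our own choices for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Illustrative single-head attention: softmax(QK^T / sqrt(d_k)) V.

    Q, K: (seq_len, d_k) query and key matrices.
    V:    (seq_len, d_v) value matrix.
    """
    d_k = Q.shape[-1]
    # Score each query against each key, scaled to keep softmax stable
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value rows
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one attended output vector per input position
```

Real Transformer layers run many of these heads in parallel over learned projections of the input, but the arithmetic above is the heart of it.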
In this chapter we’ll explore the ideas behind Transformers at a high level, looking at the three main architectures: encoder, decoder, and encoder-decoder. Note that this is only an overview of how these architectures work; to go deep into them would require several books, not just a single chapter!
We’ll then explore transformers, which we lowercase to indicate the APIs and libraries from Hugging Face that are designed to make Transformer-based models easy to use. Before transformers, you mostly had to read the papers and figure out how to implement the details yourself. The Hugging Face transformers library has widened access to models created using the Transformer architecture and has ...
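As a taste of what that widened access looks like, here is a minimal sketch using the library’s `pipeline` API. Assuming `transformers` is installed (and a model can be fetched from the Hugging Face Hub on first use), a full sentiment classifier comes down to a few lines:

```python
from transformers import pipeline

# "sentiment-analysis" selects a default pretrained classification model,
# downloaded from the Hugging Face Hub on first use
classifier = pipeline("sentiment-analysis")
result = classifier("Transformers made this kind of one-liner possible.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Contrast this with reimplementing the architecture from the paper: the library hides tokenization, model loading, and inference behind a single call, which is exactly the accessibility the chapter goes on to discuss.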