Chapter 2. Transformers and Transfer Learning

Now that you’ve been introduced to the field of natural language processing, there’s something important you need to understand: it’s not actually a very long journey from where you start to the state of the art.

Eventually, we will return to the basics, discuss the fundamentals, and understand all the details, of course. But first we’re going to show you the promised land, before we embark on the long, hard journey to get there.

One of the most important ideas to implement if you want to get deep learning working in the real world is transfer learning, which is the process of taking a model that has already been trained on another dataset and fine-tuning it to fit your new dataset. For example, if you’re training a language model to generate compelling short stories in the style of Hemingway, you could fine-tune a model trained on a wide variety of books instead of training on just the text samples of Hemingway, of which there may not be many.
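As a rough sketch of what this looks like in code, here is one way to fine-tune a pretrained language model with the Hugging Face transformers and datasets libraries. The file hemingway.txt, the choice of GPT-2, and the hyperparameters are all stand-ins for illustration, not a recipe from this chapter:

```python
# A minimal transfer learning sketch: start from a model pretrained on a
# broad corpus, then fine-tune it on a small, specific dataset.
# NOTE: "hemingway.txt" is a hypothetical file of text samples, and the
# hyperparameters below are illustrative, not tuned.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load GPT-2, a language model already trained on a wide variety of text.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load and tokenize the (hypothetical) Hemingway samples.
dataset = load_dataset("text", data_files={"train": "hemingway.txt"})
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Fine-tune: continue training the pretrained weights on the new dataset.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-hemingway", num_train_epochs=3),
    train_dataset=dataset["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The important call is from_pretrained, which loads weights already trained on a broad corpus; everything after it merely nudges those weights toward the new, much smaller dataset.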

A nice analogy in object-oriented programming is the concept of inheritance in classes. Suppose we’re making some sort of zoo management video game, where each animal is represented by a class. The animals have properties like weight and height, as well as functions like eat and sleep. In theory, we could just create a new class for each animal and replicate those shared functions, but in practice, we usually refactor our code so that we have a superclass for a generic animal and a subclass for each specific animal that inherits the shared properties and behavior.
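To make the analogy concrete, here is a minimal Python sketch; the animal, property, and method names are invented for illustration:

```python
class Animal:
    """Generic animal: shared properties and behavior live here."""

    def __init__(self, weight: float, height: float):
        self.weight = weight
        self.height = height

    def eat(self):
        print("Eating...")

    def sleep(self):
        print("Sleeping...")


class Elephant(Animal):
    """Specific animal: inherits eat/sleep, adds only what differs."""

    def trumpet(self):
        print("Trumpeting!")


dumbo = Elephant(weight=4000.0, height=3.2)
dumbo.eat()      # inherited from Animal
dumbo.trumpet()  # specific to Elephant
```

The superclass plays the role of the pretrained model: it already provides the shared behavior, and each subclass only adds or adjusts what is specific to it, much as fine-tuning adjusts a pretrained model to fit a new dataset.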
