Chapter 5. Machine Language Understanding
Now that you better understand a variety of language model architectures and their associated tasks, you are probably thinking, “Time to collect data and get to training!” Unfortunately, it is not quite that simple. As we have noted, LMs learn the syntax and structure of language and the associations between words, but not their underlying meaning. So how can machines read and comprehend? Let’s use an observation of human behavior to explore this concept.
Imagine that you are watching a young child who is just learning to speak. To learn, the child must practice. This requires them to hear words, say words, read words, and engage in conversation. They will, of course, receive guidance from you to inform the direction of their learning.
Now imagine that you are not there to help direct them. Instead, they are taken from you and raised in a mall for a year. There they learn language within the context of the mall, picking up the vocabulary to describe their sensory inputs, like what they see and smell. They come back to you, now capable of speaking quite well. Unfortunately, they seem to have little insight into what their words mean. They picked up only how to string words together in the context of a mall, and now they are home.
You ask the child how they are doing, to which you receive the response, “What’s good?” Puzzled, you say you are doing well; however, you are hungry, so you’d like their input on what they want ...