Chapter 6. Introducing Imitation Learning
In this chapter, we’re going to look at imitation learning (IL). Imitation learning is slightly different from other forms of machine learning because the intent of IL isn’t to achieve a specific goal. Instead, the intent is to copy the behavior of something else. That something else? Probably a human.
To explore IL, we’ll be making another ball-based agent that can roll around, and we’ll be training it to seek and pick up a coin (a classic video game–style pickup). But instead of training it to do what we want by reinforcing the behavior using reward signals, we’ll train it using our own human brains.
This means that, initially, we’ll be moving the agent around ourselves, using the keyboard, just like when we’ve used the heuristic behavior to control agents in previous chapters. The difference is that this time, while we drive the agent around, ML-Agents will be recording our actions as demonstrations, and once we’ve finished, we’ll use IL to let the agent work out how to copy our behavior.
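In ML-Agents, recorded demonstrations are fed into training through the trainer configuration file. As a rough sketch only (the behavior name, file paths, and hyperparameter values here are illustrative placeholders, not the ones we’ll use later), the imitation-related part of a config might look something like this:

```yaml
behaviors:
  CoinAgent:            # placeholder behavior name
    trainer_type: ppo
    behavioral_cloning:
      demo_path: Demos/CoinAgent.demo   # placeholder path to a recorded demo
      strength: 0.5                     # how strongly to imitate the demo
    reward_signals:
      gail:
        demo_path: Demos/CoinAgent.demo
        strength: 0.01                  # weight of the imitation reward
```

The `behavioral_cloning` and `gail` sections both point at the same recorded demonstration file; we’ll look at what each of them actually does later in the chapter.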
IL not only lets you create more humanlike behaviors; it can also be used to jump-start training. Some tasks have very steep initial learning curves, and training to get over these early hurdles can be quite slow. If a human can show the agent how to do a task, the agent can use that demonstration as guidance when getting started, and then optimize its approach from there. Luckily for us, humans are pretty good at plenty of things, and IL lets you take advantage of this. A disadvantage ...