In the last chapter, we briefly mentioned a problem that can arise from keeping a constant learning rate throughout training. As our model starts to learn, the initial learning rate is likely to become too large for it to keep improving: the gradient descent updates start overshooting or circling around the minimum, and, as a result, the loss stops decreasing. To address this, we can decrease the learning rate from time to time. This process is called learning rate scheduling, and there are several popular approaches.
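To make the effect concrete, here is a minimal sketch in plain Python; the quadratic loss, starting point, and learning rate values are illustrative assumptions, not something defined earlier in the book. With a learning rate that stays slightly too large, the updates overshoot the minimum and the loss grows; halving the learning rate partway through lets the updates settle.

```python
# A toy loss, loss(w) = w ** 2, whose minimum is at w = 0.
def train(n_steps, lr, halve_at=None):
    w = 2.5                      # arbitrary starting point
    for step in range(1, n_steps + 1):
        grad = 2 * w             # gradient of w ** 2
        w = w - lr * grad        # gradient descent update
        if step == halve_at:     # "from time to time": halve the learning rate once
            lr = lr / 2
    return w

# A constant learning rate of 1.05 is slightly too large here: each update flips
# the sign of w and increases its magnitude, so the loss keeps growing.
print(train(n_steps=20, lr=1.05))               # ends far from 0 (about 16.8)

# Halving the learning rate after 10 steps lets the remaining updates converge.
print(train(n_steps=20, lr=1.05, halve_at=10))  # ends very close to 0
```

Real schedulers automate this kind of adjustment for actual models and optimizers.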
The first method involves reducing the learning rate at fixed points during training, such as when training is 33% and 66% complete (see the sketch after this paragraph). Normally, you ...
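As a rough sketch of this milestone-based approach, assuming a PyTorch setup (the linear model, random data, 30 epochs, and the gamma of 0.1 below are placeholder choices for this example, not values from the chapter), torch.optim.lr_scheduler.MultiStepLR multiplies the learning rate by gamma at each milestone epoch:

```python
import torch
import torch.nn as nn

# Placeholder model, data, and hyperparameters for illustration only.
torch.manual_seed(42)
x = torch.randn(100, 10)
y = torch.randn(100, 1)

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

n_epochs = 30
# MultiStepLR multiplies the learning rate by gamma at each milestone epoch,
# here roughly 33% and 66% of the way through training (epochs 9 and 19).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[int(n_epochs * 0.33), int(n_epochs * 0.66)],
    gamma=0.1,
)

for epoch in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # one step per epoch, after the optimizer update
    print(epoch, scheduler.get_last_lr())  # 0.1, then 0.01, then 0.001
```

Calling scheduler.step() once per epoch, after optimizer.step(), keeps the scheduler's epoch count in sync with the training loop.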