Early stopping

As training of a large neural network proceeds, the training error decreases steadily over time, but as shown in the following figure, the validation set error starts to increase beyond some number of iterations:

Figure: Early stopping: training versus validation error

If training is stopped at the point where the validation error starts increasing, we can obtain a model with better generalization performance. This technique is called early stopping. It is controlled by a patience hyperparameter, which sets the number of consecutive epochs of non-improving validation error to tolerate before training is aborted. Early stopping can be used either alone or in conjunction ...
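The patience logic above can be sketched in plain Python. This is a minimal, hypothetical illustration (the function name and the simulated error values are my own, not from the book): it tracks the best validation error seen so far, counts consecutive epochs without improvement, and stops once that count reaches the patience threshold, returning the epoch whose weights we would restore.

```python
def early_stopping_index(val_errors, patience=3):
    """Return the epoch index at which training would stop under early
    stopping, i.e. the epoch with the best (lowest) validation error
    observed before patience is exhausted."""
    best = float("inf")   # best validation error so far
    best_i = 0            # epoch index of the best error
    waited = 0            # consecutive epochs without improvement
    for i, err in enumerate(val_errors):
        if err < best:
            best, best_i = err, i
            waited = 0
        else:
            waited += 1
            if waited >= patience:   # patience exhausted: abort training
                return best_i
    return best_i

# Simulated validation errors: they fall, bottom out, then rise again.
errors = [0.9, 0.7, 0.5, 0.45, 0.44, 0.46, 0.50, 0.55, 0.60]
print(early_stopping_index(errors, patience=3))  # → 4
```

In practice, deep learning frameworks provide this as a ready-made callback; for example, Keras offers `tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)`, which implements the same idea during `model.fit`.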
