Early stopping of network training

When training a network, we must specify the number of epochs in advance, without knowing how many will actually be needed. If we specify too few epochs compared to what is actually required, we may have to train the network again with more. On the other hand, if we specify far more epochs than needed, the network may overfit and we may have to retrain it with fewer epochs. This trial-and-error approach can be very time-consuming for applications where each epoch takes a long time to complete. In such situations, we can make use of callbacks that stop the network training at a suitable point.
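As a minimal sketch of this idea, the keras package for R provides callback_early_stopping(), which can be passed to fit() to halt training once a monitored metric stops improving. The data, model architecture, and parameter values below are invented purely for illustration:

# Self-contained sketch of early stopping with the keras R package.
# The synthetic data and small model are placeholders for illustration.
library(keras)

# Synthetic binary-classification data (illustrative only)
x_train <- matrix(rnorm(1000 * 20), nrow = 1000, ncol = 20)
y_train <- rbinom(1000, 1, 0.5)

model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = c(20)) %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)

# Stop once validation loss has not improved for 5 consecutive epochs,
# and roll back to the best weights observed during training.
history <- model %>% fit(
  x_train, y_train,
  epochs = 100,          # generous upper bound; the callback may stop earlier
  batch_size = 32,
  validation_split = 0.2,
  callbacks = list(
    callback_early_stopping(
      monitor = "val_loss",
      patience = 5,
      restore_best_weights = TRUE
    )
  )
)

With this setup we can set the epoch count to a generous upper bound and let the callback decide when to stop, avoiding both the retrain-with-more-epochs and the retrain-with-fewer-epochs scenarios described above.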
