Selecting optimal epochs using dropout and early stopping

To avoid overfitting, we can use two techniques. The first is adding a dropout layer, which randomly zeroes a fraction of the layer's outputs at each training iteration. This makes the data seen by the subsequent layers a little different at every iteration, so the model generalizes better rather than fitting too closely to the training data. In the following code, we add the dropout layer after the pooling layer:

library(keras)

model <- keras_model_sequential()
model %>%
  # first convolutional block
  layer_conv_2d(filters = 128, kernel_size = c(7, 7), input_shape = c(28, 28, 1), padding = "same") %>%
  layer_activation_leaky_relu() %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  # dropout after the pooling layer to reduce overfitting
  layer_dropout(rate = 0.2) %>%
  # second convolutional block (truncated in the source)
  layer_conv_2d(filters = 64, kernel_size = c(7, 7), padding ...
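
The second technique named in this section's title is early stopping, which halts training once performance on a validation set stops improving, so we do not keep training past the point of overfitting. The following is a minimal sketch of how this can be done in keras for R with an early-stopping callback passed to fit(); the data objects (x_train, y_train) and the values for patience, epochs, and batch_size are illustrative assumptions, not the book's exact code.

# Stop training when the validation loss has not improved for 5 epochs,
# and keep the weights from the best epoch seen so far
early_stop <- callback_early_stopping(
  monitor = "val_loss",
  patience = 5,
  restore_best_weights = TRUE
)

history <- model %>% fit(
  x_train, y_train,
  epochs = 50,
  batch_size = 128,
  validation_split = 0.2,
  callbacks = list(early_stop)
)

With this callback in place, the epochs argument acts as an upper bound: training stops early at whatever epoch the validation loss plateaus, which is how the optimal number of epochs is selected in practice.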
