Implementing dropout to avoid overfitting

Dropout is defined in the network architecture after the activation layers, and during training it randomly sets a fraction of the activations to zero on each forward pass. In other words, dropout temporarily deletes random parts of the neural network, which helps prevent overfitting: the network cannot fit the training data too closely when the units it relies on keep disappearing, so it is forced to learn redundant representations that generalize better.

In MXNet, dropout can easily be defined as part of the network architecture using the mx.symbol.Dropout function. For example, the following code defines dropout layers after the first ReLU activation (act1) and the second ReLU activation (act2):

dropout1 <- mx.symbol.Dropout(data = act1, p = 0.5)
dropout2 <- mx.symbol.Dropout(data = act2, p = 0.5)  # the excerpt truncates here; p = 0.5 assumed for the second layer
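
To show where these layers sit in a complete architecture, here is a minimal sketch of a two-hidden-layer network with dropout after each ReLU activation. The layer sizes (128, 64, and 10 units) and the symbol names other than act1, act2, dropout1, and dropout2 are illustrative assumptions, not taken from the excerpt.

library(mxnet)

# Input placeholder
data <- mx.symbol.Variable("data")

# First hidden layer: fully connected -> ReLU -> dropout
fc1      <- mx.symbol.FullyConnected(data = data, num_hidden = 128)
act1     <- mx.symbol.Activation(data = fc1, act_type = "relu")
dropout1 <- mx.symbol.Dropout(data = act1, p = 0.5)

# Second hidden layer: fully connected -> ReLU -> dropout
fc2      <- mx.symbol.FullyConnected(data = dropout1, num_hidden = 64)
act2     <- mx.symbol.Activation(data = fc2, act_type = "relu")
dropout2 <- mx.symbol.Dropout(data = act2, p = 0.5)

# Output layer (10 classes assumed) with softmax
fc3     <- mx.symbol.FullyConnected(data = dropout2, num_hidden = 10)
softmax <- mx.symbol.SoftmaxOutput(data = fc3, name = "softmax")

Note that the downstream layers take the dropout symbols (dropout1, dropout2) as their input rather than the raw activations, so the zeroed activations actually propagate through the network. Dropout is applied only during training; at prediction time the full network is used.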
