The ReLU activation function is non-linear, which allows errors to be backpropagated through the network. Backpropagation is the backbone of neural networks: it is the learning algorithm that computes the gradient of the loss with respect to the weights of each neuron, and gradient descent then uses those gradients to update the weights. The following are ReLU variations currently supported in DL4J:
public static final Activation RELU
public static final Activation RELU6
public static final Activation RRELU
public static final Activation THRESHOLDEDRELU
There are a few more implementations, such as ...
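As a rough illustration of how one of these constants is used, the following sketch configures a fully connected layer with a ReLU activation. It assumes DL4J's standard DenseLayer.Builder API and the Activation enum from ND4J; the layer sizes are arbitrary placeholders, not values from the text.

import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.activations.Activation;

public class ReluLayerExample {
    public static void main(String[] args) {
        // Configure a dense (fully connected) layer that applies ReLU to its outputs.
        // Any of the variants listed above (RELU6, RRELU, THRESHOLDEDRELU)
        // could be substituted for Activation.RELU here.
        DenseLayer layer = new DenseLayer.Builder()
                .nIn(784)                      // illustrative input size
                .nOut(256)                     // illustrative output size
                .activation(Activation.RELU)   // ReLU variant chosen from the enum
                .build();

        System.out.println(layer);
    }
}

In practice, the chosen Activation value is simply passed to the layer builder; swapping in a different ReLU variant does not change the rest of the configuration.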