February 2018
Intermediate to advanced
378 pages
10h 14m
English
A rectifier is a piecewise linear function that you will rarely encounter outside the context of neural networks. This class of function was designed specifically to mitigate the problems and limitations of traditional step-like activation functions. A rectifier applies a simple thresholding: max(0, x). A neuron that uses a rectifier is known as a rectified linear unit (ReLU).
Unlike sigmoids, a rectifier doesn't saturate at the upper end. This helps the neuron distinguish a poor prediction from a very poor one, and update its weights accordingly even in such a difficult situation. ReLU is also very cheap computationally: unlike sigmoids, which require exponentials, ReLU can be implemented as a simple thresholding operation. ...
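The two points above can be illustrated numerically. The sketch below (an assumption of mine, not code from the book) compares a rectifier against a sigmoid on two large inputs: the sigmoid's outputs become nearly indistinguishable as it saturates, while the rectifier keeps them clearly apart.

```python
import math

def relu(x):
    # Rectifier: simple thresholding at zero, no exponential needed
    return max(0.0, x)

def sigmoid(x):
    # Requires an exponential and saturates for large |x|
    return 1.0 / (1.0 + math.exp(-x))

# Sigmoid saturates: a poor and a very poor prediction look almost identical
print(sigmoid(10.0), sigmoid(20.0))  # both are very close to 1.0

# ReLU does not saturate: the outputs remain clearly distinguishable
print(relu(10.0), relu(20.0))
```

Because the sigmoid's outputs (and hence its gradients) nearly coincide at the upper end, weight updates there are tiny; the rectifier's output grows linearly, so the difference between the two inputs is preserved.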