Researchers have found that a neural network using the Rectified Linear Unit (ReLU) activation function trains faster than one using other non-linear functions, such as sigmoid and tanh, without a significant drop in accuracy. This makes ReLU one of the most important activation functions. It gives an output of x if x is positive, and 0 otherwise.
It is defined as follows:
A(x) = max(0, x)
The ReLU function is shown in the following figure:

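To make the definition concrete, here is a minimal sketch in Python with NumPy; the function name relu and the sample inputs are illustrative assumptions, not code from the book:

import numpy as np

def relu(x):
    # Element-wise ReLU: keep positive values, replace negatives with 0
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.0, 3.0])))
# [0. 0. 0. 1. 3.]

Because np.maximum(0, x) broadcasts the scalar 0 against the input array, the same function works for scalars, vectors, and whole batches of activations.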
Next, we discuss the tanh function, which is very similar ...