May 2018 · Beginner · 490 pages · 13h 16m · English
Activation functions transform the result of a layer's weight-and-bias calculation before it is passed on, and their output shapes the outcome of a classification, a prediction, or whatever goal the network was built for. This model applies the rectified linear unit (ReLU), as shown in the following code:
classifier.add(..., activation = 'relu')
The ReLU activation function applies the following function to an input value:
f(x) = max{0, x}
The function returns 0 for negative inputs and for an input of 0, and returns positive inputs unchanged as x.
ReLU is comparatively easy to optimize. Over half of its domain (the negative values) the function returns 0, while for positive values it is the identity, so the derivative there is always 1. ...