Gradient-based learning

In the previous chapter, we primarily discussed the single-hidden-layer perceptron model and simple neural networks, and we also introduced the concept of gradient descent. Applied to a deep neural network, gradient descent means adjusting the weights and biases of the neuron connections so as to reduce the value of the cost function. The network is initialized to a random state (random weights and bias values) and the initial cost value is calculated. The weights are then adjusted using the derivative of the cost with respect to each weight in the network.
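As a minimal sketch of this idea (an illustration, not code from this book), the following Python snippet fits a single linear neuron by starting from a random weight and bias and repeatedly adjusting them against the derivative of a mean-squared-error cost. The toy data, learning rate, and step count are assumptions chosen for demonstration:

import numpy as np

# Toy data: inputs x and targets y generated from y = 2x + 1 (illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x + 1.0

# Random initial state: weight and bias, as described in the text.
w = rng.normal()
b = rng.normal()
learning_rate = 0.1  # assumed hyperparameter

for step in range(200):
    y_hat = w * x + b                 # forward pass
    cost = np.mean((y_hat - y) ** 2)  # mean-squared-error cost
    # Derivatives of the cost with respect to w and b.
    dw = np.mean(2 * (y_hat - y) * x)
    db = np.mean(2 * (y_hat - y))
    # Step against the gradient to reduce the cost.
    w -= learning_rate * dw
    b -= learning_rate * db

print(f"learned w={w:.3f}, b={b:.3f}, final cost={cost:.6f}")

Run on the toy data above, the loop recovers values close to w = 2 and b = 1; the same update rule, applied weight by weight via backpropagation, is what trains a deep network.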

In mathematics, the derivative is a way to show the rate of change, that is, the amount by which a function is changing at a given point with respect to a change in its input.
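Concretely, the derivative at a point can be approximated numerically with a finite difference. The short sketch below (again an illustration, with a hypothetical helper name derivative) checks the approximation against the known derivative of f(x) = x squared:

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x): the rate of change of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2
print(derivative(f, 3.0))  # ~6.0, since d/dx of x**2 is 2x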
