Introducing the gradient
In mathematical terms, the gradient of a function is the vector of its partial derivatives evaluated at a given point in n-dimensional space; in the one-dimensional case it reduces to the slope of the tangent line at the point being considered.
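To make the definition concrete, the gradient can be approximated numerically with central differences, one partial derivative per coordinate. The following is a minimal sketch; the function name `numerical_gradient` and the example function f(x, y) = x² + 3y are illustrative choices, not from the text:

```python
def numerical_gradient(f, point, h=1e-6):
    """Approximate the gradient of f at `point` using central differences."""
    grad = []
    for i in range(len(point)):
        forward = list(point)
        backward = list(point)
        forward[i] += h   # nudge coordinate i forward
        backward[i] -= h  # and backward
        # Partial derivative with respect to coordinate i
        grad.append((f(forward) - f(backward)) / (2 * h))
    return grad

# Example: f(x, y) = x**2 + 3*y; the analytic gradient at (1, 2) is (2, 3)
g = numerical_gradient(lambda p: p[0] ** 2 + 3 * p[1], [1.0, 2.0])
```

At the point (1, 2) the two components of `g` approach the analytic values 2 and 3 as `h` shrinks.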
In machine learning, the gradient is used to minimize a cost function, thereby reducing the prediction errors produced by an algorithm. This amounts to minimizing the difference between the value estimated by the algorithm and the observed value.
The minimization method that's used is known as gradient descent, which optimizes the combination of weights assigned to the input data so as to obtain the minimum difference between the estimated and the observed values.
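Gradient descent as described above can be sketched for a simple linear model: compute the difference between estimated and observed values, take the partial derivatives of the mean squared error cost with respect to each weight, and step against the gradient. This is a minimal illustration, not the book's implementation; the function name, learning rate, and toy data are assumptions:

```python
def gradient_descent(xs, ys, lr=0.05, steps=1000):
    """Fit y ~ w*x + b by minimizing mean squared error with gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Difference between estimated and observed values
        errors = [(w * x + b) - y for x, y in zip(xs, ys)]
        # Partial derivatives of the MSE cost with respect to w and b
        grad_w = 2 * sum(e * x for e, x in zip(errors, xs)) / n
        grad_b = 2 * sum(errors) / n
        # Update the weights in the direction opposite to the gradient
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data drawn from y = 2x + 1: the fitted weights approach w=2, b=1
w, b = gradient_descent([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Each iteration moves the weights a small step (scaled by the learning rate `lr`) in the direction that decreases the cost, which is exactly the "optimizing the combination of weights" described above.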