Chapter 17: Gradient Descent

In this chapter, we will understand the concept of gradient descent. Every machine learning or deep learning algorithm is, at its core, an optimization exercise: the algorithm tries to reduce the difference between the predicted value and the actual value, a difference known as the loss. The loss is the difference between predicted and actual for a single iteration, while the cost function is the average of the loss over multiple iterations.
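The loss/cost distinction above can be sketched in a few lines of Python. Squared error is used here only as a concrete choice of loss; the chapter has not yet fixed a particular loss function, so treat it as an assumption for illustration:

```python
# Illustrative sketch: per-example loss vs. average cost,
# using squared error as the (assumed) loss function.

def loss(predicted, actual):
    """Loss for one prediction: the squared difference."""
    return (predicted - actual) ** 2

def cost(predictions, actuals):
    """Cost: the average of the per-example losses."""
    losses = [loss(p, a) for p, a in zip(predictions, actuals)]
    return sum(losses) / len(losses)

predictions = [2.5, 0.0, 2.1]
actuals = [3.0, -0.5, 2.0]
print(cost(predictions, actuals))  # mean squared error over three examples
```

With these toy numbers the three losses are 0.25, 0.25, and roughly 0.01, so the cost is about 0.17.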

The algorithm attempts to minimize this cost, because the objective of any algorithm is to produce results as close to the actual values as possible. We want the forecast accuracy to be high, which means the cost function should be minimal and the loss should be very low. This translates to the ...
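Minimizing the cost is exactly what gradient descent does: repeatedly nudge the parameters in the direction that reduces the cost. The following is a minimal sketch on a one-parameter model; the data, learning rate, and iteration count are my own toy choices, not values from the text:

```python
# Toy gradient descent on a one-parameter squared-error cost.
# Data is generated by y = 2x, so the optimal weight is w = 2.

def cost(w, xs, ys):
    """Average squared error of the model y_hat = w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    """Derivative of the cost with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0                 # arbitrary starting point
learning_rate = 0.05    # assumed step size for this toy problem
for _ in range(200):
    w -= learning_rate * grad(w, xs, ys)  # step opposite the gradient

print(round(w, 4))  # converges close to 2.0
```

Each iteration moves `w` a small step against the gradient of the cost, so the cost shrinks toward its minimum.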
