Stochastic gradient descent (SGD), in contrast to batch gradient descent, performs a parameter update for each individual training example $x^{(i)}$ and label $y^{(i)}$:
$$\theta = \theta - \eta \cdot \nabla_\theta J(\theta;\, x^{(i)},\, y^{(i)})$$
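A minimal sketch of this per-example update loop in Python, assuming a user-supplied gradient function; the names `grad_fn`, `lr`, and `epochs` are illustrative, not from the source:

```python
import numpy as np

def sgd(theta, X, y, grad_fn, lr=0.01, epochs=10):
    """Plain SGD: update the parameters after every single training example."""
    n = len(X)
    for _ in range(epochs):
        # Visit examples in a fresh random order each epoch
        for i in np.random.permutation(n):
            # Gradient of the loss J evaluated on the single pair (x_i, y_i)
            grad = grad_fn(theta, X[i], y[i])
            theta = theta - lr * grad  # theta <- theta - eta * grad_theta J
    return theta

# Hypothetical usage: squared-error gradient for a linear model
grad_fn = lambda th, x, t: 2 * (th @ x - t) * x
theta = sgd(np.zeros(3), np.random.randn(100, 3), np.random.randn(100), grad_fn)
```

Because each update uses only one example rather than the full dataset, the per-step cost is low but the updates are noisy, which is the characteristic trade-off of SGD against batch gradient descent.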