Chapter 4: Variance Reduction Methods
Stochastic approximation is one of the most effective approaches to large-scale machine learning problems, and recent research has focused on reducing the variance caused by noisy approximations of the gradients. In this chapter, we propose novel variants of SAAG-I and SAAG-II (Stochastic Average Adjusted Gradient), Chauhan et al. (2017) [23], called SAAG-III and SAAG-IV, respectively. Unlike SAAG-I, in SAAG-III the starting point is set to the average of the previous epoch; unlike SAAG-II, in SAAG-IV the snap point and the starting point are set to the average and the last iterate of the previous epoch, respectively. To determine the step size, we use Stochastic Backtracking-Armijo line ...
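The distinction between the two variants can be illustrated with a minimal sketch. The loop below is a generic SVRG-style variance-reduced iteration on a least-squares problem, not the SAAG update itself; it only shows where the epoch's starting point and snap point differ in a SAAG-III-like versus SAAG-IV-like scheme. The problem setup (`A`, `b`, the least-squares gradients) and the snap-point choice in the SAAG-III branch are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def full_grad(w, A, b):
    # Full gradient of the least-squares loss (1/2n)||Aw - b||^2.
    return A.T @ (A @ w - b) / len(b)

def stoch_grad(w, A, b, i):
    # Gradient of the i-th component loss (1/2)(A_i w - b_i)^2.
    return A[i] * (A[i] @ w - b[i])

def variance_reduced_epochs(A, b, variant="III", epochs=20, lr=0.05, seed=0):
    # Illustrative SVRG-style loop; only the epoch-transition rules below
    # mimic the SAAG-III / SAAG-IV point choices described in the text.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)                  # starting point of the first epoch
    snap = w.copy()                  # snap point for variance reduction
    for _ in range(epochs):
        mu = full_grad(snap, A, b)   # full gradient at the snap point
        iterates = []
        for _ in range(n):           # inner loop: one pass over the data
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimate.
            g = stoch_grad(w, A, b, i) - stoch_grad(snap, A, b, i) + mu
            w = w - lr * g
            iterates.append(w.copy())
        avg = np.mean(iterates, axis=0)
        last = iterates[-1]
        if variant == "III":
            # SAAG-III-like: starting point = average of previous epoch
            # (the snap-point choice here is our assumption).
            w, snap = avg.copy(), last.copy()
        else:
            # SAAG-IV-like: snap point = average, starting point = last iterate.
            w, snap = last.copy(), avg.copy()
    return w
```

Averaging the previous epoch's iterates smooths out the noise of the final stochastic steps, which is the intuition behind using the epoch average as a starting or snap point rather than the last (noisy) iterate alone.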