April 2015
Intermediate to advanced
1062 pages
40h 35m
English
This chapter introduces the stochastic gradient descent family of online/adaptive algorithms within the framework of the squared-error loss function. The gradient descent approach to optimization is presented, and the stochastic approximation method is discussed. Then the LMS algorithm and its offspring, such as the APA and the NLMS, are introduced. Finally, distributed learning is discussed, with an emphasis on distributed versions of the LMS.
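As a minimal sketch of the LMS update described above, the following Python/NumPy snippet identifies an unknown FIR filter from noisy input/output data. The filter taps, step size, and signal lengths are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system-identification setup: estimate an unknown
# 4-tap filter w_true from input/desired-output pairs.
w_true = np.array([0.5, -0.3, 0.2, 0.1])
n_taps = len(w_true)
N = 2000

x = rng.standard_normal(N)
# Desired signal: output of the unknown filter plus small observation noise.
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

mu = 0.05                 # step size (assumed; must respect LMS stability bounds)
w = np.zeros(n_taps)      # adaptive filter estimate
for n in range(n_taps, N):
    u = x[n - n_taps + 1:n + 1][::-1]   # current input regressor [x[n], ..., x[n-3]]
    e = d[n] - w @ u                    # a priori estimation error
    w = w + mu * e * u                  # LMS update: w <- w + mu * e * u

print(np.round(w, 2))
```

With a small step size and white input, the estimate converges close to the true taps; the NLMS variant would instead divide the update by the regressor energy `u @ u` to make the step size scale-invariant.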
Keywords
Affine projection algorithm
Distributed learning
Diffusion LMS
Gradient descent method
Least-mean-squares (LMS) adaptive algorithm
Method of stochastic approximation
Robbins-Monro algorithm
Steepest ...