9.1 THE NORMALIZED LEAST MEAN-SQUARE ALGORITHM

Consider the conventional least mean-square (LMS) algorithm with the fixed step-size parameter µ replaced by a time-varying step size µ(*n*) as follows (we absorb the factor of 2 into µ for simplicity):

$w(n+1) = w(n) + \mu(n)e(n)x(n)$  (9.1)

Next, define the *a posteriori* error, *e*_{ps}(*n*), as

${e}_{ps}(n) = d(n) - {w}^{T}(n+1)x(n)$  (9.2)

Substituting (9.1) into (9.2), and taking into consideration the *a priori* error equation *e*(*n*) = *d*(*n*) − **w**^{T}(*n*)*x*(*n*), we obtain

${e}_{ps}(n) = \left[1 - \mu(n){x}^{T}(n)x(n)\right]e(n)$  (9.3)

where

${x}^{T}(n)x(n) = {\Vert x(n)\Vert}^{2} = {\displaystyle \sum_{i=0}^{M-1}}|x(n-i){|}^{2}$
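The algebra behind (9.3) can be checked numerically: apply the LMS update (9.1) for an arbitrary regressor, weight vector, and step size, and compare the resulting *a posteriori* error (9.2) against the closed form [1 − µ(*n*)**x**^{T}(*n*)**x**(*n*)]*e*(*n*). This is only an illustrative sketch; the vector length, signals, and step size below are arbitrary choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                            # filter length (arbitrary for this check)
x = rng.standard_normal(M)       # regressor x(n)
w = rng.standard_normal(M)       # current weight vector w(n)
d = 1.5                          # desired response d(n) (arbitrary)
mu = 0.1                         # arbitrary time-varying step size mu(n)

e = d - w @ x                    # a priori error e(n) = d(n) - w^T(n) x(n)
w_next = w + mu * e * x          # LMS update, Eq. (9.1)
e_ps = d - w_next @ x            # a posteriori error, Eq. (9.2)

# Eq. (9.3): e_ps(n) = [1 - mu(n) x^T(n) x(n)] e(n)
print(np.isclose(e_ps, (1 - mu * (x @ x)) * e))   # True
```

The check passes for any choice of *x*(*n*), *w*(*n*), *d*(*n*), and µ(*n*), since (9.3) is an algebraic identity rather than an approximation.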

Minimizing *e*_{ps}(*n*) with respect to µ(*n*), which drives the *a posteriori* error in (9.3) to zero, results in (see Problem 9.1.1)

$\mu(n) = \frac{1}{{\Vert x(n)\Vert}^{2}}$  (9.4)
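Putting (9.1) and (9.4) together gives the normalized LMS update. The sketch below identifies an unknown FIR system from noiseless input/output data; the regularizer `eps` in the denominator is a common practical safeguard against division by zero and is an assumption added here, not part of the derivation, as are the example channel `h` and signal lengths.

```python
import numpy as np

def nlms_step(w, x, d, eps=1e-8):
    """One NLMS update with mu(n) = 1/||x(n)||^2 from Eq. (9.4).
    eps is a small regularizer (an added assumption, not in the text)."""
    e = d - w @ x                    # a priori error e(n)
    mu = 1.0 / (eps + x @ x)        # normalized step size, Eq. (9.4)
    return w + mu * e * x, e        # weight update, Eq. (9.1)

# Identify an example FIR system h from noiseless data (assumed setup).
rng = np.random.default_rng(1)
h = np.array([0.5, -0.4, 0.2, 0.1])  # "unknown" system
M = h.size
w = np.zeros(M)                       # adaptive filter weights
u = rng.standard_normal(500)          # input signal u(n)
for n in range(M - 1, u.size):
    x = u[n - M + 1:n + 1][::-1]      # regressor [u(n), ..., u(n-M+1)]
    d = h @ x                          # desired response d(n)
    w, e = nlms_step(w, x, d)

print(np.allclose(w, h, atol=1e-6))   # True: weights converge to h
```

Because each update with µ(*n*) = 1/‖*x*(*n*)‖² zeroes the *a posteriori* error, the weight vector is projected onto the solution hyperplane at every step, which is why convergence on noiseless data is rapid.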
