### 5.13.3 Convergence and Steady-State Performance: Some Highlights

In this subsection, we summarize some findings concerning the performance analysis of the DiLMS, without giving proofs. The proofs follow lines similar to those for the standard LMS, with slightly more involved algebra. The interested reader can find the proofs in the original papers as well as in [84].

The gradient descent scheme in (5.90), (5.91) is guaranteed to converge, meaning

$$\boldsymbol{\theta}_k^{(i)} \xrightarrow[i\to\infty]{} \boldsymbol{\theta}_*,$$

provided that

$$\mu_k \le \frac{2}{\lambda_{\max}\left\{\Sigma_k^{\mathrm{loc}}\right\}},$$

where

$$\Sigma_k^{\mathrm{loc}} = \sum_{m\in\mathcal{N}_k} c_{mk}\,\Sigma_{x_m}.$$
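The step-size bound above is directly computable when the input covariance matrices and the combination weights are known. The following minimal sketch (the function name and interface are our own, not from the text) forms $\Sigma_k^{\mathrm{loc}}$ for a node $k$ and returns the corresponding upper bound on $\mu_k$; weights $c_{mk}$ are taken to be zero for nodes outside the neighborhood $\mathcal{N}_k$, so the sum can run over all nodes.

```python
import numpy as np

def local_stepsize_bound(C, Sigmas, k):
    """Return the upper bound 2 / lambda_max{Sigma_k^loc} on the
    step-size mu_k for node k (hypothetical helper, not from the text).

    C      : (K, K) array of combination weights, with C[m, k] = c_mk
             (zero for m outside the neighborhood N_k)
    Sigmas : list of K input covariance matrices Sigma_{x_m}
    k      : index of the node whose bound is computed
    """
    # Sigma_k^loc = sum over m in N_k of c_mk * Sigma_{x_m}
    Sigma_loc = sum(C[m, k] * Sigmas[m] for m in range(len(Sigmas)))
    # Covariance matrices are symmetric, so eigvalsh applies.
    lam_max = np.linalg.eigvalsh(Sigma_loc).max()
    return 2.0 / lam_max

# Example: a 3-node network with identity input covariances and
# column-stochastic weights gives Sigma_k^loc = I, hence the bound 2.
C = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
Sigmas = [np.eye(2) for _ in range(3)]
bound = local_stepsize_bound(C, Sigmas, k=0)  # = 2.0 here
```

Note that because the $c_{mk}$ are nonnegative and sum to one over each neighborhood, $\lambda_{\max}\{\Sigma_k^{\mathrm{loc}}\}$ is at most the largest eigenvalue among the neighbors' covariances, so cooperation never tightens the stability condition below that of the worst neighbor.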
