### 5.13.3 Convergence and Steady-State Performance: Some Highlights

In this subsection, we summarize some findings concerning the performance analysis of the DiLMS. We will not give proofs; they follow lines similar to those for the standard LMS, albeit with slightly more involved algebra. The interested reader can find the proofs in the original papers as well as in [84].

The gradient descent scheme in Eqs. (5.90) and (5.91) is guaranteed to converge, meaning that the estimate at every node $k$ satisfies

$$\boldsymbol{\theta}_k^{(i)} \xrightarrow[i\to\infty]{} \boldsymbol{\theta}_*,$$

provided that

$$\mu_k \leq \frac{2}{\lambda_{\max}\left\{\Sigma_k^{\mathrm{loc}}\right\}},$$

where

$$\Sigma_k^{\mathrm{loc}} = \sum_{m\in\mathcal{N}_k} c_{mk}\,\Sigma_{x_m}.$$
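To make the step-size condition concrete, the following is a minimal NumPy sketch of a diffusion LMS of the adapt-then-combine type on a small toy network. The ring topology, the uniform combination weights $c_{mk}$, the noise level, and the safety factor applied to the bound $2/\lambda_{\max}\{\Sigma_k^{\mathrm{loc}}\}$ are all illustrative assumptions for this sketch, not necessarily the exact scheme of Eqs. (5.90), (5.91).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: K nodes on a ring; each node is also its own neighbor.
K, d, n_iter = 5, 4, 2000
theta_star = rng.standard_normal(d)          # common parameter vector theta_*

# Neighborhoods N_k (ring topology) and uniform combination weights c_mk.
neighbors = [{k, (k - 1) % K, (k + 1) % K} for k in range(K)]
C = np.zeros((K, K))
for k in range(K):
    for m in neighbors[k]:
        C[m, k] = 1.0 / len(neighbors[k])    # each column of C sums to one

# Per-node input covariances Sigma_{x_m} (diagonal, for simplicity).
Sigmas = [np.diag(rng.uniform(0.5, 2.0, d)) for _ in range(K)]

# Step sizes obeying mu_k <= 2 / lambda_max{Sigma_k^loc}; a 0.5 safety
# factor (an assumption of this sketch) keeps us well inside the bound.
mus = []
for k in range(K):
    Sigma_loc = sum(C[m, k] * Sigmas[m] for m in neighbors[k])
    mus.append(0.5 * 2.0 / np.linalg.eigvalsh(Sigma_loc).max())

theta = [np.zeros(d) for _ in range(K)]      # theta_k^(0)
for i in range(n_iter):
    # Adaptation step: each node runs an LMS update on its own streaming sample.
    psi = []
    for k in range(K):
        x = np.linalg.cholesky(Sigmas[k]) @ rng.standard_normal(d)
        y = x @ theta_star + 0.01 * rng.standard_normal()
        psi.append(theta[k] + mus[k] * x * (y - x @ theta[k]))
    # Combination step: convex averaging over the neighborhood with weights c_mk.
    theta = [sum(C[m, k] * psi[m] for m in neighbors[k]) for k in range(K)]

print("max node error:", max(np.linalg.norm(t - theta_star) for t in theta))
```

Running the sketch, the reported error shrinks toward the noise floor as `n_iter` grows; choosing a step size above the bound for some node makes that node's recursion, and through the combination step its neighbors', diverge.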
