In this subsection, we summarize some findings concerning the performance analysis of DiLMS. We do not give proofs; they follow lines similar to those for the standard LMS, albeit with slightly more involved algebra. The interested reader can find the proofs in the original papers as well as in [84].

• The gradient descent scheme in (5.90), (5.91) is guaranteed to converge, meaning

$\theta_{k}^{(i)} \xrightarrow[i\to\infty]{} \theta_{*},$

provided that

$\mu_{k}\le \frac{2}{\lambda_{\max}\{\Sigma_{k}^{\mathrm{loc}}\}},$

where

$\Sigma_{k}^{\mathrm{loc}} = \sum_{m\in \mathcal{N}_{k}} c_{mk}\,\Sigma_{x_{m}}.$
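The step-size condition above can be checked numerically. The following sketch constructs a hypothetical neighborhood for a node $k$ (the neighbor covariances $\Sigma_{x_m}$ and combination weights $c_{mk}$ are made up for illustration and are not from the text), forms the local covariance $\Sigma_k^{\mathrm{loc}}$, and computes the resulting upper bound on $\mu_k$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3  # dimension of the unknown parameter vector theta

def random_covariance(dim):
    # A @ A.T is symmetric positive semi-definite, so it can serve
    # as a stand-in for an input covariance matrix Sigma_{x_m}.
    A = rng.standard_normal((dim, dim))
    return A @ A.T / dim

# Hypothetical neighborhood N_k of node k: three nodes (including k itself),
# each with its own input covariance and combination weight c_mk.
Sigmas = [random_covariance(p) for _ in range(3)]
c = np.array([0.5, 0.3, 0.2])  # combination weights; sum to 1

# Local covariance: Sigma_k^loc = sum_{m in N_k} c_mk * Sigma_{x_m}
Sigma_loc = sum(c_m * S for c_m, S in zip(c, Sigmas))

# Convergence condition on the step-size: mu_k <= 2 / lambda_max(Sigma_k^loc)
lam_max = np.linalg.eigvalsh(Sigma_loc).max()
mu_bound = 2.0 / lam_max
print(f"lambda_max = {lam_max:.4f}, step-size bound = {mu_bound:.4f}")
```

Note that because $\Sigma_k^{\mathrm{loc}}$ is a convex combination of the neighbors' covariances, its largest eigenvalue is bounded by the largest eigenvalue over the neighborhood, so the local bound on $\mu_k$ is never more restrictive than the worst single-node bound in $\mathcal{N}_k$.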
