5.13.3 Convergence and Steady-State Performance: Some Highlights

In this subsection, we summarize some findings concerning the performance analysis of the DiLMS. We will not give proofs; they follow lines similar to those for the standard LMS, with slightly more involved algebra. The interested reader can find the proofs in the original papers as well as in [84].

The gradient descent scheme in (5.90), (5.91) is guaranteed to converge, meaning

\[
\theta_k(i) \longrightarrow \theta_*, \quad \text{as } i \longrightarrow \infty,
\]

provided that

\[
\mu_k < \frac{2}{\lambda_{\max}\left\{\Sigma_k^{\text{loc}}\right\}},
\]

where

\[
\Sigma_k^{\text{loc}} = \sum_{m \in \mathcal{N}_k} c_{mk} \Sigma_{x_m}.
\]
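
To make the step-size condition concrete, the following Python sketch assembles \(\Sigma_k^{\text{loc}}\) from the neighbors' input covariance matrices and the combination weights \(c_{mk}\), and evaluates the resulting bound \(2/\lambda_{\max}\{\Sigma_k^{\text{loc}}\}\) on \(\mu_k\). The function names (local_covariance, step_size_bound) and the variables Sigma_x, c, and neighbors are hypothetical illustration aids, not part of the DiLMS formulation itself.

import numpy as np

def local_covariance(neighbors, c, Sigma_x, k):
    """Sigma_k^loc = sum over m in N_k of c_{mk} * Sigma_{x_m}."""
    return sum(c[m, k] * Sigma_x[m] for m in neighbors[k])

def step_size_bound(Sigma_k_loc):
    """Upper bound 2 / lambda_max{Sigma_k^loc} on the step size mu_k."""
    lam_max = np.linalg.eigvalsh(Sigma_k_loc).max()  # Sigma_k^loc is symmetric PSD
    return 2.0 / lam_max

# Toy example: node k = 0 with neighborhood N_0 = {0, 1} and equal combination weights.
rng = np.random.default_rng(0)
dim = 3                                       # parameter dimension
A = rng.standard_normal((2, dim, dim))
Sigma_x = np.einsum('nij,nkj->nik', A, A)     # two symmetric PSD input covariance matrices
c = np.full((2, 2), 0.5)                      # columns sum to one (convex combination weights)
neighbors = {0: [0, 1]}

Sigma_0_loc = local_covariance(neighbors, c, Sigma_x, 0)
print("mu_0 must satisfy mu_0 <", step_size_bound(Sigma_0_loc))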

