9 Neural Net Learning and Loss Bounds Analysis

In this chapter, loss-bound derivations for neural net learning are given for the gradient descent (GD) learning process, as well as for other learning processes defined within the link formalism. The link formalism provides a unified variational formalism for learning, or “information dynamics” (Section 9.2.1). To start, however, a brief but detailed description of neural nets is given, beginning with the single neuron (Section 9.1.1). Section 9.1.2 describes neural nets and back-propagation (which is then explored further in the TensorFlow implementation described in Chapter 13). Section 9.3 describes the motivation for the link function f(ω) = sinh⁻¹(ω). Section 9.4 provides a loss bounds analysis for the link function f(ω) = sinh⁻¹(ω).
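To make the link formalism concrete before the detailed treatment in Sections 9.3 and 9.4, the sketch below shows the general shape of a link-function update: the additive gradient step is taken in the transformed (“link”) space, f(w_new) = f(w_old) − η∇L, and then mapped back through the inverse link. With the identity link this reduces to ordinary GD; with f(ω) = sinh⁻¹(ω) the step is taken in arcsinh-space. This is a minimal illustration, not the chapter’s derivation; the squared-loss data, learning rate, and helper names (`link_update`, `arcsinh_step`) are illustrative assumptions.

```python
import numpy as np

def link_update(w, grad, eta, f, f_inv):
    """One update in the link-function formalism:
    take the additive step in the link space, then map back."""
    return f_inv(f(w) - eta * grad)

# Gradient descent corresponds to the identity link.
gd_step = lambda w, g, eta: link_update(w, g, eta, lambda x: x, lambda x: x)

# sinh^-1 link: f(w) = arcsinh(w), with inverse f^-1(u) = sinh(u).
arcsinh_step = lambda w, g, eta: link_update(w, g, eta, np.arcsinh, np.sinh)

# Toy usage on a squared-loss linear prediction problem (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true

w, eta = np.zeros(5), 0.01
for x_t, y_t in zip(X, y):
    grad = 2.0 * (w @ x_t - y_t) * x_t   # gradient of (w.x - y)^2
    w = arcsinh_step(w, grad, eta)
print("learned weights:", np.round(w, 2))
```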

In Machine Learning we have identified good families of “update rules” that can perform neural net learning. In examining update rules that are “good,” it is possible in some cases to explicitly derive the error rate, or loss, incurred during learning and to quantify loss bounds on the overall learning process. In doing so, a learning phenomenology of “regularization” appears for the better algorithms, akin to the phenomenology of inertia in Newtonian physics when predicting motion. The loss bounds analysis reveals that statistical learning is feasible in these domains of application, i.e., it is effectively a proof of the feasibility of “statistical learning.” However, the description and results also indicate ...
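The quantity constrained by a loss bounds analysis is the cumulative loss accrued over the sequence of learning trials. The sketch below simply tracks that cumulative quantity for plain online gradient descent on synthetic data; the dataset, dimensions, and learning rate are illustrative assumptions, not values from the text.

```python
import numpy as np

# Cumulative (online) loss: the sum over trials of the per-trial loss
# incurred while learning, which a loss bounds analysis seeks to bound.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_target = np.array([0.5, -1.0, 2.0])
y = X @ w_target

w, eta, cumulative_loss = np.zeros(3), 0.05, 0.0
for x_t, y_t in zip(X, y):
    y_hat = w @ x_t                         # predict before seeing the label
    loss_t = (y_hat - y_t) ** 2             # per-trial squared loss
    cumulative_loss += loss_t
    w -= eta * 2.0 * (y_hat - y_t) * x_t    # GD update after the trial
print(f"cumulative loss over {len(y)} trials: {cumulative_loss:.3f}")
```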
