Learning in MLPs

The multi-layer perceptron network learns by means of the Delta Rule, which is itself inspired by the gradient descent optimization method. The gradient method is widely applied to find the minima or maxima of a given function:

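In its usual form (the notation here is generic and assumed, not necessarily the symbols used in this chapter), each iteration moves the current estimate a small step along the negative gradient when minimizing, or along the positive gradient when maximizing:

$$ x^{(k+1)} = x^{(k)} - \eta \, \nabla f\!\left(x^{(k)}\right) $$

where x^(k) is the estimate at iteration k and η is the step size (learning rate).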

This method works by stepping in the direction in which the function's output grows or shrinks, depending on the optimization criterion. The Delta Rule builds on this concept:

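For a single neuron, the Delta Rule can be stated as a gradient step on each weight; in its common form (again with generic symbols, assumed here rather than taken from the excerpt):

$$ \Delta w_i = \eta \, (t - y) \, x_i $$

where t is the target output, y is the neuron's actual output, x_i is the i-th input, and η is the learning rate. When the neuron has a non-linear activation, an extra factor f'(net) multiplying the error yields the generalized form used in multi-layer networks.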

The function that the Delta Rule seeks to minimize is the error between the neural network's output and the target output, and the parameters to be found ...
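A common choice for that error, assumed here to be the usual squared-error measure rather than quoted from the book, is:

$$ E = \frac{1}{2} \sum_{k} (t_k - y_k)^2 $$

where t_k and y_k are the target and actual outputs of the k-th output neuron; minimizing E with respect to the weights by gradient descent gives the Delta Rule update above.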

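To make the update concrete, the following is a minimal sketch of the Delta Rule training a single sigmoid neuron on a toy AND problem. It is an illustrative example under the assumptions above (squared error, sigmoid activation, generic names), not code from the book.

// Minimal sketch: Delta Rule weight updates for a single sigmoid neuron.
// All names are illustrative; this is not the book's implementation.
public class DeltaRuleSketch {

    public static void main(String[] args) {
        // Toy problem: learn the logical AND of two inputs.
        double[][] inputs = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        double[] targets  = { 0, 0, 0, 1 };

        double[] weights = { 0.1, -0.2 };  // arbitrary initial weights
        double bias = 0.0;
        double eta = 0.5;                  // learning rate

        for (int epoch = 0; epoch < 5000; epoch++) {
            for (int p = 0; p < inputs.length; p++) {
                // Forward pass: weighted sum followed by the sigmoid activation.
                double net = bias;
                for (int i = 0; i < weights.length; i++) {
                    net += weights[i] * inputs[p][i];
                }
                double y = 1.0 / (1.0 + Math.exp(-net));

                // Delta Rule with sigmoid: delta = (target - output) * y * (1 - y).
                double delta = (targets[p] - y) * y * (1.0 - y);

                // Gradient-descent step on each weight and on the bias.
                for (int i = 0; i < weights.length; i++) {
                    weights[i] += eta * delta * inputs[p][i];
                }
                bias += eta * delta;
            }
        }

        // Print the learned output for each input pattern.
        for (int p = 0; p < inputs.length; p++) {
            double net = bias;
            for (int i = 0; i < weights.length; i++) {
                net += weights[i] * inputs[p][i];
            }
            double y = 1.0 / (1.0 + Math.exp(-net));
            System.out.printf("input %d,%d -> %.3f%n",
                    (int) inputs[p][0], (int) inputs[p][1], y);
        }
    }
}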