The output of each layer in the network depends on the parameters of the model, which are estimated by training the neural network to minimize the loss with respect to the weights, as we described earlier. This is a general principle in machine learning: a learning procedure, for example backpropagation, uses the gradients of a model's error to update its parameters so as to minimize that error. Consider estimating the parameters of a linear regression model such that the output of the model minimizes the mean squared error (MSE). Mathematically speaking, the point-wise error between a prediction $\hat{y}_i$ and its target $y_i$ is $\hat{y}_i - y_i$, and the MSE is the average of the squared point-wise errors over the $N$ training examples, $\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$.
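To make this concrete, here is a minimal sketch, not taken from the book's code, of estimating the parameters of a linear regression model by gradient descent on the MSE using Keras. The synthetic data (targets generated as y = 3x + 2 plus noise) and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fit a linear model by minimizing the MSE with gradient descent.
# The data-generating values (3.0, 2.0) and hyperparameters are illustrative only.
import numpy as np
from tensorflow import keras

# Synthetic data: targets follow y = 3x + 2 plus Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = 3.0 * x + 2.0 + 0.1 * rng.normal(size=(256, 1)).astype("float32")

# A single Dense unit with no activation is a linear model: y_hat = w*x + b
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(1),
])

# Gradient descent on the mean squared error updates w and b
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=100, batch_size=32, verbose=0)

w, b = model.layers[0].get_weights()
print(f"estimated weight: {w[0, 0]:.2f}, estimated bias: {b[0]:.2f}")
```

Because the single Dense unit computes $\hat{y} = wx + b$, minimizing the MSE with SGD drives the estimated weight and bias toward the values used to generate the data.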
The parameter estimation