Arriving at optimal network weights 

The time series observations are fed to DeepAR during training. At each time step, the network consumes the current covariates, the previous observation, and the previous network output. Gradients are computed with Backpropagation Through Time (BPTT), which unrolls the network across the time steps of each series. These gradients drive the Adam optimizer, a variant of stochastic gradient descent, which iteratively updates the network weights toward their optimal values.
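As a minimal sketch of this training loop, the following PyTorch code pairs an unrolled LSTM with Adam; the class name, tensor shapes, hidden size, and Gaussian likelihood here are illustrative assumptions, not the actual SageMaker DeepAR implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative network (a sketch, not SageMaker DeepAR's actual code):
# an LSTM whose output at each step parameterizes a Gaussian likelihood
# for the current observation.
class ToyDeepAR(nn.Module):
    def __init__(self, num_covariates, hidden_size=40):
        super().__init__()
        # Input per step: covariates x_t plus the previous target z_{t-1}
        self.lstm = nn.LSTM(num_covariates + 1, hidden_size, batch_first=True)
        self.mu = nn.Linear(hidden_size, 1)      # Gaussian mean
        self.sigma = nn.Linear(hidden_size, 1)   # pre-softplus std dev

    def forward(self, covariates, prev_targets):
        inputs = torch.cat([covariates, prev_targets.unsqueeze(-1)], dim=-1)
        h, _ = self.lstm(inputs)                 # unrolled over all time steps
        return self.mu(h).squeeze(-1), F.softplus(self.sigma(h)).squeeze(-1)

# Dummy batch with hypothetical shapes: 8 series, 24 steps, 3 covariates
covariates = torch.randn(8, 24, 3)
targets = torch.randn(8, 24)
prev_targets = torch.roll(targets, shifts=1, dims=1)  # z_{t-1} at each step

model = ToyDeepAR(num_covariates=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    mu, sigma = model(covariates, prev_targets)
    # Negative Gaussian log-likelihood; backward() propagates gradients
    # through every unrolled time step, i.e. BPTT.
    loss = -torch.distributions.Normal(mu, sigma).log_prob(targets).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # Adam updates the network weights
```

Calling `loss.backward()` on a loss summed over the whole sequence is what makes this BPTT: the gradient of each weight accumulates contributions from every time step of the unrolled network.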

At each time step $t$, the inputs to the network are the covariates, $x_{i,t}$; the target at the previous time step, $z_{i,t-1}$; as well as the previous network output, $h_{i,t-1}$. The network then computes its new output, $h_{i,t} = h(h_{i,t-1}, z_{i,t-1}, x_{i,t})$, where $h$ is the function implemented by the recurrent network; this output is used to parameterize the likelihood of the current observation.
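The same recurrence can be written out step by step; in this assumed sketch, an `LSTMCell` stands in for $h$, and all names and shapes are illustrative:

```python
import torch
import torch.nn as nn

# Step-by-step view of the recurrence above: at each t the cell receives
# the covariates x_t and the previous target z_{t-1}, while the previous
# network output h_{t-1} enters as the recurrent state.
cell = nn.LSTMCell(input_size=3 + 1, hidden_size=40)  # 3 covariates + 1 target
h = torch.zeros(8, 40)   # h_0: initial network output/state, batch of 8
c = torch.zeros(8, 40)   # LSTM cell state

x = torch.randn(8, 24, 3)   # covariates x_t over 24 time steps
z = torch.randn(8, 24)      # observed targets z_t (teacher forcing)

for t in range(1, 24):
    step_input = torch.cat([x[:, t], z[:, t - 1].unsqueeze(-1)], dim=-1)
    h, c = cell(step_input, (h, c))   # h_t = h(h_{t-1}, z_{t-1}, x_t)
    # h now parameterizes the likelihood of z_t, as in the loop above
```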
