Now that we have a model, let's take a look at how to train a recurrent neural network in CNTK.
First, we need to define the loss function that we want to optimize. Since we're predicting a continuous variable (power output), we'll use the mean squared error loss. We'll pair the loss with a mean squared error metric to measure the performance of our model. Remember from Chapter 4, Validating Model Performance, that we can combine the loss and the metric in a single function object using the @Function decorator:
from cntk import Function
from cntk.losses import squared_error
from cntk.learners import adam

@Function
def criterion_factory(z, t):
    # Use the squared error both as the loss to optimize and as the metric to report.
    loss = squared_error(z, t)
    metric = squared_error(z, t)

    return loss, metric

loss = criterion_factory(model, target)
learner = adam(model.parameters, lr=0.005, momentum=0.9)
...
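As a rough sketch of how these pieces come together, the combined criterion function can be trained directly through CNTK's Function.train API, passing in the learner and a progress writer. The x_train and y_train arrays, their shapes, the minibatch size, and the number of epochs below are illustrative placeholders, not values from this chapter:

# A minimal sketch, assuming x_train and y_train are NumPy arrays whose shapes
# match the model's input and target variables (placeholders, not defined here).
import numpy as np
from cntk.logging import ProgressPrinter

x_train = np.random.rand(1000, 14, 3).astype(np.float32)  # dummy input sequences
y_train = np.random.rand(1000, 1).astype(np.float32)      # dummy power-output targets

progress_writer = ProgressPrinter(freq=10)

loss.train(
    (x_train, y_train),               # NumPy data mapped to the criterion's (z, t) arguments
    parameter_learners=[learner],
    callbacks=[progress_writer],
    minibatch_size=16,
    max_epochs=10
)

Because the criterion function wraps both the loss and the metric, the progress writer reports the training loss and the metric value as the minibatches are processed.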