We can begin to improve our model with a new function that loops through several optimization calls until the error approaches a minimum, at which point the model is as optimized as this approach will get. Here's what it looks like:
```python
# Python implementation
def train(results, weight, bias, xs, learningRate, iterations):
    error = 0
    for i in range(iterations):
        # One gradient-descent step, then recompute the cost for logging
        weight, bias = optimizeWeightBias(results, weight, bias, xs, learningRate)
        error = cost(results, weight, bias, xs)
        print("Iteration: {}, weight: {:.4f}, bias: {:.4f}, error: {:.2f}".format(i, weight, bias, error))
    return weight, bias
```
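To see the loop converge end to end, here is a minimal, self-contained sketch. It assumes `cost` is mean squared error and `optimizeWeightBias` performs one gradient-descent step for a linear model `y = weight * x + bias` — plausible definitions for the helpers referenced above, not necessarily the exact ones used earlier. The per-iteration `print` is omitted for brevity.

```python
# Hypothetical helper implementations, assuming a linear model
# y = weight * x + bias and a mean-squared-error cost.
def cost(results, weight, bias, xs):
    # Mean squared error between predictions and observed results
    return sum((weight * x + bias - r) ** 2 for x, r in zip(xs, results)) / len(xs)

def optimizeWeightBias(results, weight, bias, xs, learningRate):
    # One gradient-descent step: move against the partial derivatives of the cost
    n = len(xs)
    dW = sum(2 * (weight * x + bias - r) * x for x, r in zip(xs, results)) / n
    dB = sum(2 * (weight * x + bias - r) for x, r in zip(xs, results)) / n
    return weight - learningRate * dW, bias - learningRate * dB

def train(results, weight, bias, xs, learningRate, iterations):
    for i in range(iterations):
        weight, bias = optimizeWeightBias(results, weight, bias, xs, learningRate)
    return weight, bias

# Data generated by y = 2x + 1; training should approach weight=2, bias=1
xs = [0, 1, 2, 3, 4]
results = [1, 3, 5, 7, 9]
w, b = train(results, 0.0, 0.0, xs, 0.05, 2000)
```

With a small learning rate and enough iterations, the recovered `weight` and `bias` land close to the values that generated the data.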
The Solidity implementation looks quite similar, although we have to make sure that the arrays of results and independent-variable values have the same length to avoid errors, ...