Suppose we have a model that is supposed to predict some continuous value, like a stock price, and that we have accumulated some predicted values we can compare against actual observed values:
observation,prediction
22.1,17.9
10.4,9.1
9.3,7.8
18.5,14.2
12.9,15.6
7.2,7.4
11.8,9.7
...
Now, how do we measure the performance of this model? The first step is to take the difference between each observed and predicted value to get an error:
observation,prediction,error
22.1,17.9,4.2
10.4,9.1,1.3
9.3,7.8,1.5
18.5,14.2,4.3
12.9,15.6,-2.7
7.2,7.4,-0.2
11.8,9.7,2.1
...
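The error column above can be computed with a simple element-wise subtraction. A minimal sketch in Python, using the sample values from the table (the variable names are illustrative, not from the original):

```python
# Per-row error: observation minus prediction.
# Values copied from the sample data above.
observations = [22.1, 10.4, 9.3, 18.5, 12.9, 7.2, 11.8]
predictions = [17.9, 9.1, 7.8, 14.2, 15.6, 7.4, 9.7]

# Round to one decimal place to avoid floating-point noise in the output.
errors = [round(o - p, 1) for o, p in zip(observations, predictions)]
print(errors)  # [4.2, 1.3, 1.5, 4.3, -2.7, -0.2, 2.1]
```

A positive error means the model under-predicted; a negative error means it over-predicted.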
The error gives us a general idea of how far off we were from the value we were supposed to predict. However, it's not really feasible or practical ...