In Chapters 4 through 8, we introduced some of the most common supervised machine learning approaches. For each technique, we began by explaining the basic principles behind it and then illustrated how to build a model with it in R. For the regression examples, we used several measures to evaluate how well our model fit the observed data; this is known as goodness of fit. For the classification examples, we used a simple metric, predictive accuracy, to evaluate the performance of our models. Predictive accuracy is easy to calculate: simply divide the number of correct predictions by the total number of predictions. However, it does not always provide a complete picture of a model's estimated future performance.
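As a quick refresher, the calculation can be sketched in a couple of lines of R. The `predicted` and `actual` vectors here are hypothetical class labels, invented purely for illustration:

```r
# Hypothetical predicted and actual class labels for six observations.
predicted <- c("spam", "ham", "spam", "ham", "spam", "ham")
actual    <- c("spam", "ham", "ham",  "ham", "spam", "spam")

# Predictive accuracy: correct predictions divided by total predictions.
# The logical comparison yields TRUE/FALSE, and mean() treats TRUE as 1,
# so this is equivalent to sum(predicted == actual) / length(actual).
accuracy <- mean(predicted == actual)
accuracy  # 4 of 6 predictions are correct
```

Note that this single number tells us nothing about *which* classes the model gets wrong, a limitation we return to later in the chapter.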
In this chapter, we discuss some of the limitations of predictive accuracy and introduce some other metrics that provide additional perspectives on model performance. Before we do so, we explore some of the different ways in which we can partition our data in order to get the best estimate of future performance from a given model or set of models.
By the end of this chapter, you will have learned the following:
- The different approaches to resampling as a means to estimate the future performance of a model
- The pros and cons of the different resampling techniques
- How to evaluate model performance with metrics other than accuracy
- How to visualize model performance
ESTIMATING FUTURE PERFORMANCE
During the model ...