Comparing models with k-fold cross-validation

As the k-fold cross-validation method proved to be the better evaluation method, it is also more suitable for comparing models. The reason is that k-fold cross-validation produces many estimates of the evaluation metric, and by averaging these estimates we get a more reliable assessment of model performance.
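As a minimal sketch of this idea, the following compares two models by averaging their per-fold scores. It uses a synthetic regression dataset and illustrative model choices (linear regression vs. k-nearest neighbors), which are assumptions here rather than the chapter's actual setup:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for a real dataset (assumption for illustration)
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

models = {
    'linear_regression': LinearRegression(),
    'knn': KNeighborsRegressor(n_neighbors=5),
}

for name, model in models.items():
    # cross_val_score returns one score per fold; averaging the k
    # estimates gives a more stable assessment than a single split
    scores = cross_val_score(model, X, y, cv=10,
                             scoring='neg_mean_squared_error')
    print(f"{name}: mean MSE = {-scores.mean():.2f} "
          f"(std across folds = {scores.std():.2f})")
```

Because each model is scored on the same ten folds, the averaged metrics can be compared directly, and the spread across folds indicates how sensitive each model is to the particular train/test split.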

The following shows the code used to import libraries for comparing models:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline

After importing the libraries, we'll load the diamonds dataset. The following shows the code used to prepare this dataset:

# importing data
data_path = '../data/diamonds.csv'
diamonds = pd.read_csv(data_path)
diamonds = pd.concat([diamonds, pd.get_dummies(diamonds['cut'], ...
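The snippet above is truncated, but the preparation step it shows is one-hot encoding a categorical column and concatenating the dummy columns back onto the frame. A self-contained sketch of that pattern on toy data (column names and values here are assumptions, not the real diamonds file):

```python
import pandas as pd

# Toy stand-in for the diamonds data (illustrative values only)
toy = pd.DataFrame({'cut': ['Ideal', 'Premium', 'Ideal'],
                    'price': [330, 340, 350]})

# get_dummies turns the categorical 'cut' column into one indicator
# column per category; concat appends them alongside the originals
toy = pd.concat([toy, pd.get_dummies(toy['cut'], prefix='cut')], axis=1)
print(toy.columns.tolist())
```

After this step the frame contains the original columns plus one numeric indicator column per category, which is the form most scikit-learn estimators expect.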
