When trying out different models for classification, it can be useful to measure their agreement using Cohen's kappa score, which we can calculate with the cohen_kappa_score() function in the metrics module. The score ranges from complete disagreement (-1) to complete agreement (1), with 0 meaning the agreement is no better than chance. Our boosting and bagging predictions have a kappa of nearly 0.72, indicating substantial agreement beyond chance:
>>> from sklearn.metrics import cohen_kappa_score
>>> cohen_kappa_score(
...     rf_grid.predict(r_X_test), gb_grid.predict(r_X_test)
... )
0.7185929648241206
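Note that kappa is not the raw percentage of matching predictions; it compares the observed agreement (p_o) to the agreement expected by chance (p_e) as (p_o - p_e) / (1 - p_e). A small, made-up example (the arrays below are hypothetical, not from our data) shows how the two can diverge when both models heavily favor one class:

>>> import numpy as np
>>> # two hypothetical sets of binary predictions that match on
>>> # 6 of 8 observations
>>> a = np.array([1, 1, 1, 1, 0, 1, 1, 1])
>>> b = np.array([1, 1, 1, 1, 1, 1, 1, 0])
>>> (a == b).mean()  # raw agreement is 75%
0.75
>>> # both predict 1 almost always, so most of that agreement is
>>> # expected by chance; kappa corrects for this
>>> round(cohen_kappa_score(a, b), 3)
-0.143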
Sometimes, we can't find a single classification model that works well for all of our data, so we may want a way to combine the opinions of various models into a final decision. Scikit-learn provides the VotingClassifier class in the ensemble module for this purpose.
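As a minimal sketch of that idea (assuming rf_grid and gb_grid are our fitted GridSearchCV objects; the training-set names r_X_train and r_y_train are assumptions here), a voting ensemble of the two tuned models could be built like this:

>>> from sklearn.ensemble import VotingClassifier
>>> # each tuned model participates via its best_estimator_;
>>> # r_X_train and r_y_train are assumed names for the training data
>>> voting = VotingClassifier([
...     ('rf', rf_grid.best_estimator_),
...     ('gb', gb_grid.best_estimator_)
... ])
>>> ensemble_preds = voting.fit(r_X_train, r_y_train).predict(r_X_test)

By default, VotingClassifier uses hard (majority) voting; passing voting='soft' averages the models' predicted probabilities instead.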