For our experiment, we picked a predefined scorer function. For classification, five measures are available (accuracy, AUC, precision, recall, and F1 score), and for regression, there are three (R², MAE, and MSE). Although these are some of the most common measures, you may need a different one. In our example, it is useful to use a loss function in order to check whether the right answer is still ranked high in probability even when the classifier is wrong (that is, whether the right answer is the second or third option of the algorithm). How do we manage that?
In the sklearn.metrics module, there is actually a log_loss function. All we have to do is wrap it so that GridSearchCV can use it.
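As a minimal sketch of that wrapping (the dataset, estimator, and parameter grid below are illustrative assumptions, not part of the original example), make_scorer from sklearn.metrics turns log_loss into a scorer object that GridSearchCV accepts. Because log_loss is a loss (lower is better) and is computed on class probabilities rather than hard predictions, we pass greater_is_better=False and needs_proba=True:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, make_scorer
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Wrap log_loss so GridSearchCV can use it: greater_is_better=False
# makes the search minimize the loss (the scorer reports it negated),
# and needs_proba=True feeds the metric predict_proba output instead
# of hard class labels. (In scikit-learn >= 1.4, needs_proba is
# superseded by response_method="predict_proba".)
neg_log_loss = make_scorer(log_loss,
                           greater_is_better=False,
                           needs_proba=True)

# Illustrative search: estimator and grid are assumptions for the sketch.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid={'C': [0.1, 1.0, 10.0]},
                      scoring=neg_log_loss,
                      cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Note that best_score_ is reported as a negative number: since GridSearchCV always maximizes its scoring function, greater_is_better=False flips the sign of the loss so that the smallest log loss corresponds to the highest score.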