As we mentioned before, to turn the predicted probabilities returned by the model into final predictions, we need a classification threshold. If we change the threshold, we get a different classifier, a different confusion matrix and, of course, different classification metrics. Let's see what happens if we use a threshold of 0.4:
from sklearn.metrics import precision_score, recall_score

threshold = 0.4

# Probabilities of the positive class for the test set
y_pred_prob = rf.predict_proba(X_test)[:, 1]

# Apply the threshold to turn probabilities into class predictions
y_pred = (y_pred_prob > threshold).astype(int)

precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
print("Precision: {:.1f}%, Recall: {:.1f}%".format(100*precision, 100*recall))

# Show the confusion matrix with the helper used earlier
CM(y_test, y_pred)
We get the following results:
For making the ...
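Since each threshold yields a different precision/recall pair, it can also help to sweep a range of candidate thresholds and recompute the metrics at each one. The snippet below is a small illustrative sketch of that idea, not part of the original example: it reuses the rf model, X_test, and y_test from above, and the threshold grid (0.1 to 0.9) is an arbitrary choice.

import numpy as np
from sklearn.metrics import precision_score, recall_score

# Probabilities of the positive class, as in the example above
y_pred_prob = rf.predict_proba(X_test)[:, 1]

# Evaluate precision and recall at several candidate thresholds
for threshold in np.arange(0.1, 1.0, 0.1):
    y_pred = (y_pred_prob > threshold).astype(int)
    # zero_division=0 avoids warnings if no positives are predicted at high thresholds
    precision = precision_score(y_test, y_pred, zero_division=0)
    recall = recall_score(y_test, y_pred)
    print("Threshold: {:.1f} -> Precision: {:.1f}%, Recall: {:.1f}%".format(
        threshold, 100 * precision, 100 * recall))

Printing the metrics side by side like this makes the trade-off explicit: raising the threshold generally increases precision at the cost of recall, and lowering it does the opposite.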