Data scientists are often faced with a problem that requires an automated decision. Is an email an attempt at phishing? Is a customer likely to churn? Is the web user likely to click on an advertisement? These are all classification problems. Classification is perhaps the most important form of prediction: the goal is to predict whether a record is a 0 or a 1 (phishing/not-phishing, click/don’t click, churn/don’t churn), or in some cases, one of several categories (for example, Gmail’s filtering of your inbox into “primary,” “social,” “promotional,” or “forums”).
Often, we need more than a simple binary classification: we want to know the predicted probability that a case belongs to a class.
Rather than having a model simply assign a binary classification, most algorithms can return a probability score (propensity) of belonging to the class of interest. In fact, with logistic regression, the default output from R is on the log-odds scale, and this must be transformed to a propensity. A sliding cutoff can then be used to convert the propensity score to a decision. The general approach is as follows:
1. Establish a cutoff probability for the class of interest, above which we consider a record as belonging to that class.
2. Estimate (with any model) the probability that a record belongs to the class of interest.
3. If that probability is above the cutoff probability, assign the new record to the class of interest.
The higher the cutoff, the fewer records predicted as 1, that is, as belonging to the class of interest.
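The approach above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's API: the log-odds values are made-up scores standing in for the raw output of a logistic regression, the `propensity` helper applies the standard logistic (sigmoid) transform to put them on the probability scale, and a sliding cutoff then turns each propensity into a 0/1 decision.

```python
import numpy as np

def propensity(log_odds):
    """Convert a log-odds score (e.g., raw logistic regression
    output) to a probability via the logistic function."""
    return 1 / (1 + np.exp(-log_odds))

# Hypothetical log-odds scores for five records
log_odds = np.array([-2.0, -0.5, 0.0, 0.8, 2.5])
probs = propensity(log_odds)

# Apply a sliding cutoff: raising the cutoff shrinks the set of
# records classified as 1 (the class of interest)
for cutoff in [0.3, 0.5, 0.7]:
    predicted = (probs > cutoff).astype(int)
    print(f"cutoff={cutoff}: {predicted.sum()} records predicted as 1")
```

Running the loop shows the counts falling monotonically as the cutoff rises, which is the trade-off the sliding cutoff controls: a low cutoff casts a wide net for the class of interest, a high cutoff flags only the most probable cases.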