Introducing Scikit-Learn
There are several Python libraries that provide solid implementations of a range of machine learning algorithms. One of the best known is Scikit-Learn, a package that provides efficient versions of a large number of common algorithms. Scikit-Learn is characterized by a clean, uniform, and streamlined API, as well as by very useful and complete online documentation. A benefit of this uniformity is that once you understand the basic use and syntax of Scikit-Learn for one type of model, switching to a new model or algorithm is very straightforward.
This section provides an overview of the Scikit-Learn API; a solid understanding of these API elements will form the foundation for understanding the deeper practical discussion of machine learning algorithms and approaches in the following chapters.
We will start by covering data representation in Scikit-Learn, followed by the Estimator API, and finish with a more interesting example of using these tools to explore a set of images of handwritten digits.
Data Representation in Scikit-Learn
Machine learning is about creating models from data: for that reason, we'll start by discussing how data can be represented in order to be understood by the computer. The best way to think about data within Scikit-Learn is in terms of tables of data.
Data as table
A basic table is a two-dimensional grid of data, in which the rows represent individual elements of the dataset, and the columns represent quantities related to each of these elements.
For example, consider the Iris dataset, famously analyzed by Ronald Fisher in 1936.
We can download this dataset in the form of a Pandas DataFrame using the Seaborn library:
import seaborn as sns
iris = sns.load_dataset('iris')
iris.head()
   sepal_length  sepal_width  petal_length  petal_width species
0           5.1          3.5           1.4          0.2  setosa
1           4.9          3.0           1.4          0.2  setosa
2           4.7          3.2           1.3          0.2  setosa
3           4.6          3.1           1.5          0.2  setosa
4           5.0          3.6           1.4          0.2  setosa
Here each row of the data refers to a single observed flower, and the number of rows is the total number of flowers in the dataset.
In general, we will refer to the rows of the matrix as samples, and the number of rows as n_samples.
Likewise, each column of the data refers to a particular quantitative piece of information that describes each sample.
In general, we will refer to the columns of the matrix as features, and the number of columns as n_features.
Features matrix
This table layout makes clear that the information can be thought of as a two-dimensional numerical array or matrix, which we will call the features matrix.
By convention, this features matrix is often stored in a variable named X.
The features matrix is assumed to be two-dimensional, with shape [n_samples, n_features], and is most often contained in a NumPy array or a Pandas DataFrame, though some Scikit-Learn models also accept SciPy sparse matrices.
The samples (i.e., rows) always refer to the individual objects described by the dataset. For example, the sample might be a flower, a person, a document, an image, a sound file, a video, an astronomical object, or anything else you can describe with a set of quantitative measurements.
The features (i.e., columns) always refer to the distinct observations that describe each sample in a quantitative manner. Features are generally real-valued, but may be Boolean or discrete-valued in some cases.
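As a small illustrative sketch (the values are hypothetical, not from the Iris data), a features matrix for three samples with two features each could be built as follows:
import numpy as np

X = np.array([[5.1, 3.5],     # each row is one sample
              [4.9, 3.0],
              [4.7, 3.2]])    # each column is one feature
X.shape                       # (3, 2), i.e. [n_samples, n_features]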
Target array
In addition to the feature matrix X, we also generally work with a label or target array, which by convention we will usually call y.
The target array is usually one-dimensional, with length n_samples, and is generally contained in a NumPy array or Pandas Series.
The target array may have continuous numerical values, or discrete classes/labels.
While some Scikit-Learn estimators do handle multiple target values in the form of a two-dimensional [n_samples, n_targets] target array, we will primarily be working with the common case of a one-dimensional target array.
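As another small illustrative sketch (again with hypothetical values), a one-dimensional target array simply holds one label or value per sample:
import numpy as np

y = np.array(['setosa', 'setosa', 'versicolor'])   # one discrete label per sample
y.shape                                            # (3,), i.e. length n_samples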
Often one point of confusion is how the target array differs from the other feature columns. The distinguishing characteristic of the target array is that it is usually the quantity we want to predict from the data: in statistical terms, it is the dependent variable.
For example, in the preceding data we may wish to construct a model that can predict the species of flower based on the other measurements; in this case, the species column would be considered the target array.
With this target array in mind, we can use Seaborn (see Visualization With Seaborn) to conveniently visualize the data:
%matplotlib inline
import seaborn as sns; sns.set()
sns.pairplot(iris, hue='species', height=1.5);
For use in Scikit-Learn, we will extract the features matrix and target array from the DataFrame, which we can do using some of the Pandas DataFrame operations discussed in Chapter 3:
X_iris = iris.drop('species', axis=1)
X_iris.shape
(150, 4)
y_iris = iris['species']
y_iris.shape
(150,)
To summarize, the expected layout of features and target values is visualized in the following diagram:
With this data properly formatted, we can move on to consider the estimator API of Scikit-Learn.
Scikit-Learn's Estimator API
The Scikit-Learn API is designed with the following guiding principles in mind, as outlined in the Scikit-Learn API paper:

Consistency: All objects share a common interface drawn from a limited set of methods, with consistent documentation.
Inspection: All specified parameter values are exposed as public attributes.
Limited object hierarchy: Only algorithms are represented by Python classes; datasets are represented in standard formats (NumPy arrays, Pandas DataFrames, SciPy sparse matrices), and parameter names use standard Python strings.
Composition: Many machine learning tasks can be expressed as sequences of more fundamental algorithms, and Scikit-Learn makes use of this wherever possible.
Sensible defaults: When models require user-specified parameters, the library defines an appropriate default value.
In practice, these principles make Scikit-Learn very easy to use once the basics are understood. Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications.
Basics of the API
Most commonly, the steps in using the Scikit-Learn estimator API are as follows (we will step through a handful of detailed examples in the sections that follow):
1. Choose a class of model by importing the appropriate estimator class from Scikit-Learn.
2. Choose model hyperparameters by instantiating this class with desired values.
3. Arrange data into a features matrix and target vector following the discussion above.
4. Fit the model to your data by calling the fit() method of the model instance.
5. Apply the model to new data:
   For supervised learning, often we predict labels for unknown data using the predict() method.
   For unsupervised learning, we often transform or infer properties of the data using the transform() or predict() method.
We will now step through several simple examples of applying supervised and unsupervised learning methods.
Supervised learning example: Simple linear regression
As an example of this process, let's consider a simple linear regression—that is, the common case of fitting a line to $(x, y)$ data. We will use the following simple data for our regression example:
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.RandomState(42)
x = 10 * rng.rand(50)
y = 2 * x - 1 + rng.randn(50)
plt.scatter(x, y);
With this data in place, we can use the recipe outlined earlier. Let's walk through the process:
1. Choose a class of model
In Scikit-Learn, every class of model is represented by a Python class. So, for example, if we would like to compute a simple linear regression model, we can import the linear regression class:
from sklearn.linear_model import LinearRegression
Note that other more general linear regression models exist as well; you can read more about them in the sklearn.linear_model module documentation.
2. Choose model hyperparameters
An important point is that a class of model is not the same as an instance of a model.
Once we have decided on our model class, there are still some options open to us. Depending on the model class we are working with, we might need to answer one or more questions like the following:
Would we like to fit for the offset (i.e., y-intercept)?
Would we like the model to be normalized?
Would we like to preprocess our features to add model flexibility?
What degree of regularization would we like to use in our model?
How many model components would we like to use?
These are examples of the important choices that must be made once the model class is selected. These choices are often represented as hyperparameters, or parameters that must be set before the model is fit to data. In Scikit-Learn, hyperparameters are chosen by passing values at model instantiation. We will explore how you can quantitatively motivate the choice of hyperparameters in Hyperparameters and Model Validation.
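To make this concrete (an illustrative aside; not all of these estimators are used below), several of the questions above correspond directly to arguments passed when instantiating an estimator class:
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.decomposition import PCA

LinearRegression(fit_intercept=True)   # fit for the offset (y-intercept)?
Ridge(alpha=1.0)                       # what degree of regularization?
PCA(n_components=2)                    # how many model components?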
For our linear regression example, we can instantiate the LinearRegression class and specify that we would like to fit the intercept using the fit_intercept hyperparameter:
model = LinearRegression(fit_intercept=True)
model
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
Keep in mind that when the model is instantiated, the only action is the storing of these hyperparameter values. In particular, we have not yet applied the model to any data: the Scikit-Learn API makes very clear the distinction between choice of model and application of model to data.
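In keeping with the Inspection principle listed earlier, the stored hyperparameter values can be read back from the instance at any time (output abbreviated here for illustration):
model.get_params()
# e.g. {'copy_X': True, 'fit_intercept': True, ...}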
3. Arrange data into a features matrix and target vector
Previously we detailed the Scikit-Learn data representation, which requires a two-dimensional features matrix and a one-dimensional target array.
Here our target variable y is already in the correct form (a length-n_samples array), but we need to massage the data x to make it a matrix of size [n_samples, n_features].
In this case, this amounts to a simple reshaping of the one-dimensional array:
X = x[:, np.newaxis]
X.shape
(50, 1)
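Equivalently (an illustrative alternative, not part of the original walkthrough), the reshape method produces the same layout:
x.reshape(-1, 1).shape   # (50, 1); -1 tells NumPy to infer n_samples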
4. Fit the model to your data
Now it is time to apply our model to data.
This can be done with the fit() method of the model:
model.fit(X, y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
This fit() command causes a number of model-dependent internal computations to take place, and the results of these computations are stored in model-specific attributes that the user can explore. In Scikit-Learn, by convention all model parameters that were learned during the fit() process have trailing underscores; for example, in this linear model, we have the following:
model.coef_
array([ 1.9776566])
model.intercept_
-0.90331072553111635
These two parameters represent the slope and intercept of the simple linear fit to the data. Comparing to the data definition, we see that they are very close to the input slope of 2 and intercept of -1.
One question that frequently comes up regards the uncertainty in such internal model parameters. In general, Scikit-Learn does not provide tools to draw conclusions from internal model parameters themselves: interpreting model parameters is much more a statistical modeling question than a machine learning question. Machine learning rather focuses on what the model predicts. If you would like to dive into the meaning of fit parameters within the model, other tools are available, including the Statsmodels Python package.
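As a hedged sketch of one such tool (assuming the statsmodels package is installed), the same fit can be run through statsmodels to obtain standard errors and confidence intervals for the parameters:
import statsmodels.api as sm

X_const = sm.add_constant(x)          # add an intercept column to the 1D x
ols_result = sm.OLS(y, X_const).fit()
print(ols_result.summary())           # includes standard errors and confidence intervals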
5. Predict labels for unknown data
Once the model is trained, the main task of supervised machine learning is to evaluate it based on what it says about new data that was not part of the training set.
In Scikit-Learn, this can be done using the predict() method.
For the sake of this example, our "new data" will be a grid of x values, and we will ask what y values the model predicts:
xfit = np.linspace(-1, 11)
As before, we need to coerce these x values into a [n_samples, n_features] features matrix, after which we can feed it to the model:
Xfit = xfit[:, np.newaxis]
yfit = model.predict(Xfit)
Finally, let's visualize the results by plotting first the raw data, and then this model fit:
plt.scatter(x, y)
plt.plot(xfit, yfit);
Typically the efficacy of the model is evaluated by comparing its results to some known baseline, as we will see in the next example.
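As a quick quantitative check on this particular fit (an illustrative addition, evaluated on the training data itself rather than a held-out set), we can compute the coefficient of determination:
from sklearn.metrics import r2_score

r2_score(y, model.predict(X))   # close to 1.0, indicating a good fit to the training data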
Supervised learning example: Iris classification
Let's take a look at another example of this process, using the Iris dataset we discussed earlier. Our question will be this: given a model trained on a portion of the Iris data, how well can we predict the remaining labels?
For this task, we will use an extremely simple generative model known as Gaussian naive Bayes, which proceeds by assuming each class is drawn from an axis-aligned Gaussian distribution (see In Depth: Naive Bayes Classification for more details). Because it is so fast and has no hyperparameters to choose, Gaussian naive Bayes is often a good model to use as a baseline classifier, before exploring whether improvements can be found through more sophisticated models.
We would like to evaluate the model on data it has not seen before, and so we will split the data into a training set and a testing set.
This could be done by hand, but it is more convenient to use the train_test_split utility function:
from sklearn.model_selection import train_test_split   # formerly sklearn.cross_validation in older versions

Xtrain, Xtest, ytrain, ytest = train_test_split(X_iris, y_iris, random_state=1)
With the data arranged, we can follow our recipe to predict the labels:
from sklearn.naive_bayes import GaussianNB   # 1. choose model class
model = GaussianNB()                         # 2. instantiate model
model.fit(Xtrain, ytrain)                    # 3. fit model to data
y_model = model.predict(Xtest)               # 4. predict on new data
Finally, we can use the accuracy_score utility to see the fraction of predicted labels that match their true values:
from sklearn.metrics import accuracy_score
accuracy_score(ytest, y_model)
0.97368421052631582
With an accuracy topping 97%, we see that even this very naive classification algorithm is effective for this particular dataset!
Unsupervised learning example: Iris dimensionality
As an example of an unsupervised learning problem, let's take a look at reducing the dimensionality of the Iris data so as to more easily visualize it. Recall that the Iris data is four dimensional: there are four features recorded for each sample.
The task of dimensionality reduction is to ask whether there is a suitable lower-dimensional representation that retains the essential features of the data. Often dimensionality reduction is used as an aid to visualizing data: after all, it is much easier to plot data in two dimensions than in four dimensions or higher!
Here we will use principal component analysis (PCA; see In Depth: Principal Component Analysis), which is a fast linear dimensionality reduction technique. We will ask the model to return two components—that is, a twodimensional representation of the data.
Following the sequence of steps outlined earlier, we have:
from sklearn.decomposition import PCA   # 1. choose the model class
model = PCA(n_components=2)             # 2. instantiate the model with hyperparameters
model.fit(X_iris)                       # 3. fit to data; notice y is not specified!
X_2D = model.transform(X_iris)          # 4. transform the data to two dimensions
Now let's plot the results. A quick way to do this is to insert the results into the original Iris DataFrame, and use Seaborn's lmplot to show the results:
iris['PCA1'] = X_2D[:, 0]
iris['PCA2'] = X_2D[:, 1]
sns.lmplot(x='PCA1', y='PCA2', hue='species', data=iris, fit_reg=False);
We see that in the two-dimensional representation, the species are fairly well separated, even though the PCA algorithm had no knowledge of the species labels! This indicates to us that a relatively straightforward classification will probably be effective on the dataset, as we saw before.
Unsupervised learning: Iris clustering
Let's next look at applying clustering to the Iris data. A clustering algorithm attempts to find distinct groups of data without reference to any labels. Here we will use a powerful clustering method called a Gaussian mixture model (GMM), discussed in more detail in In Depth: Gaussian Mixture Models. A GMM attempts to model the data as a collection of Gaussian blobs.
We can fit the Gaussian mixture model as follows:
from sklearn.mixture import GaussianMixture      # 1. choose the model class (GaussianMixture replaces the older GMM class)
model = GaussianMixture(n_components=3,
                        covariance_type='full')  # 2. instantiate the model with hyperparameters
model.fit(X_iris)                                # 3. fit to data; notice y is not specified!
y_gmm = model.predict(X_iris)                    # 4. determine cluster labels
As before, we will add the cluster label to the Iris DataFrame and use Seaborn to plot the results:
iris['cluster'] = y_gmm
sns.lmplot(x='PCA1', y='PCA2', data=iris, hue='species', col='cluster', fit_reg=False);
By splitting the data by cluster number, we see exactly how well the GMM algorithm has recovered the underlying label: the setosa species is separated perfectly within cluster 0, while there remains a small amount of mixing between versicolor and virginica. This means that even without an expert to tell us the species labels of the individual flowers, the measurements of these flowers are distinct enough that we could automatically identify the presence of these different groups of species with a simple clustering algorithm! This sort of algorithm might further give experts in the field clues as to the relationship between the samples they are observing.
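One illustrative way to quantify this recovery (an addition to the original walkthrough) is to cross-tabulate the cluster assignments against the true species:
import pandas as pd

pd.crosstab(iris['cluster'], iris['species'])   # counts of each species within each cluster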
Application: Exploring Handwritten Digits
To demonstrate these principles on a more interesting problem, let's consider one piece of the optical character recognition problem: the identification of handwritten digits. In the wild, this problem involves both locating and identifying characters in an image. Here we'll take a shortcut and use Scikit-Learn's set of preformatted digits, which is built into the library.
Loading and visualizing the digits data
We'll use Scikit-Learn's data access interface and take a look at this data:
from sklearn.datasets import load_digits

digits = load_digits()
digits.images.shape
(1797, 8, 8)
The images data is a three-dimensional array: 1,797 samples each consisting of an 8 × 8 grid of pixels. Let's visualize the first hundred of these:
import matplotlib.pyplot as plt

fig, axes = plt.subplots(10, 10, figsize=(8, 8),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))

for i, ax in enumerate(axes.flat):
    ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
    ax.text(0.05, 0.05, str(digits.target[i]),
            transform=ax.transAxes, color='green')
In order to work with this data within Scikit-Learn, we need a two-dimensional, [n_samples, n_features] representation.
We can accomplish this by treating each pixel in the image as a feature: that is, by flattening out the pixel arrays so that we have a length-64 array of pixel values representing each digit.
Additionally, we need the target array, which gives the previously determined label for each digit.
These two quantities are built into the digits dataset under the data and target attributes, respectively:
X = digits.data
X.shape
(1797, 64)
y = digits.target
y.shape
(1797,)
We see here that there are 1,797 samples and 64 features.
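As an illustrative sanity check (not part of the original walkthrough), these 64 features are exactly the flattened 8 × 8 pixel grids from digits.images:
import numpy as np

np.allclose(digits.images.reshape(len(digits.images), -1), digits.data)   # True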
Unsupervised learning: Dimensionality reduction
We'd like to visualize our points within the 64-dimensional parameter space, but it's difficult to effectively visualize points in such a high-dimensional space. Instead we'll reduce the dimensions to 2, using an unsupervised method. Here, we'll make use of a manifold learning algorithm called Isomap (see In Depth: Manifold Learning), and transform the data to two dimensions:
from sklearn.manifold import Isomap

iso = Isomap(n_components=2)
iso.fit(digits.data)
data_projected = iso.transform(digits.data)
data_projected.shape
(1797, 2)
We see that the projected data is now two-dimensional. Let's plot this data to see if we can learn anything from its structure:
plt.scatter(data_projected[:, 0], data_projected[:, 1], c=digits.target,
            edgecolor='none', alpha=0.5,
            cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar(label='digit label', ticks=range(10))
plt.clim(-0.5, 9.5);
This plot gives us some good intuition into how well various numbers are separated in the larger 64dimensional space. For example, zeros (in black) and ones (in purple) have very little overlap in parameter space. Intuitively, this makes sense: a zero is empty in the middle of the image, while a one will generally have ink in the middle. On the other hand, there seems to be a more or less continuous spectrum between ones and fours: we can understand this by realizing that some people draw ones with "hats" on them, which cause them to look similar to fours.
Overall, however, the different groups appear to be fairly well separated in the parameter space: this tells us that even a very straightforward supervised classification algorithm should perform suitably on this data. Let's give it a try.
Classification on digits
Let's apply a classification algorithm to the digits. As with the Iris data previously, we will split the data into a training and testing set, and fit a Gaussian naive Bayes model:
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
from sklearn.naive_bayes import GaussianNB

model = GaussianNB()
model.fit(Xtrain, ytrain)
y_model = model.predict(Xtest)
Now that we have the model's predictions, we can gauge its accuracy by comparing the true values of the test set to the predictions:
from sklearn.metrics import accuracy_score
accuracy_score(ytest, y_model)
0.83333333333333337
With even this extremely simple model, we find over 80% accuracy for classification of the digits! However, this single number doesn't tell us where we've gone wrong; one nice way to explore this is to use the confusion matrix, which we can compute with Scikit-Learn and plot with Seaborn:
from sklearn.metrics import confusion_matrix

mat = confusion_matrix(ytest, y_model)
sns.heatmap(mat, square=True, annot=True, cbar=False)
plt.xlabel('predicted value')
plt.ylabel('true value');
This shows us where the mislabeled points tend to be: for example, a large number of twos here are misclassified as either ones or eights. Another way to gain intuition into the characteristics of the model is to plot the inputs again, with their predicted labels. We'll use green for correct labels, and red for incorrect labels:
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
                         subplot_kw={'xticks': [], 'yticks': []},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))

test_images = Xtest.reshape(-1, 8, 8)

for i, ax in enumerate(axes.flat):
    ax.imshow(test_images[i], cmap='binary', interpolation='nearest')
    ax.text(0.05, 0.05, str(y_model[i]),
            transform=ax.transAxes,
            color='green' if (ytest[i] == y_model[i]) else 'red')
Examining this subset of the data, we can gain insight regarding where the algorithm might not be performing optimally. To go beyond our 80% classification rate, we might move to a more sophisticated algorithm such as support vector machines (see In Depth: Support Vector Machines), random forests (see In Depth: Decision Trees and Random Forests), or another classification approach.
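Because of the uniform estimator API, trying such an alternative is a small change to the recipe above; the following sketch (with an illustrative choice of hyperparameters) swaps in a random forest:
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(Xtrain, ytrain)
accuracy_score(ytest, model.predict(Xtest))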
Summary
In this section we have covered the essential features of the Scikit-Learn data representation and the estimator API. Regardless of the type of estimator, the same import/instantiate/fit/predict pattern holds. Armed with this information about the estimator API, you can explore the Scikit-Learn documentation and begin trying out various models on your data.
In the next section, we will explore perhaps the most important topic in machine learning: how to select and validate your model.