Chapter 8. Feature Selection

We use feature selection to choose features that are useful to the model. Irrelevant features can have a negative effect on a model, and correlated features can make the coefficients in a regression (or the feature importances in a tree model) unstable or difficult to interpret.

The curse of dimensionality is another issue to consider. As you increase the number of dimensions of your data, it becomes more sparse. This can make it difficult to pull out a signal unless you have more data. Neighbor calculations tend to lose their usefulness as more dimensions are added.

Also, training time is usually a function of the number of columns (and sometimes it is worse than linear). If you can be concise and precise with your columns, you can build a better model in less time. We will walk through some examples using the agg_df dataset from the last chapter. Remember that this is the Titanic dataset with some extra columns for cabin information. Because this dataset aggregates numeric values for each cabin, it will show many correlations. Other options for cutting down the number of features include PCA and inspecting the .feature_importances_ attribute of a tree classifier.
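
Here is a quick sketch of the latter, assuming X and y are the feature matrix and survival target prepared in earlier chapters (those names are assumptions, not something defined in this section):

>>> from sklearn.ensemble import RandomForestClassifier
>>> rf = RandomForestClassifier(random_state=42)
>>> rf.fit(X, y)  # X and y assumed from earlier chapters
>>> # rank the columns by how much the forest relies on them
>>> sorted(
...     zip(X.columns, rf.feature_importances_),
...     key=lambda t: t[1],
...     reverse=True,
... )[:5]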

Collinear Columns

We can use the previously defined correlated_columns function or run the following code to find columns that have a correlation coefficient of .95 or above:

>>> limit = 0.95
>>> corr = agg_df.corr()
>>> mask = np.triu(
...     np.ones(corr.shape), k=1
... ).astype(bool)
>>> corr_no_diag = corr.where(mask)
>>> coll = [
...     c
...     for c in corr_no_diag.columns
...     if any(abs(corr_no_diag[c]) > limit)
... ]
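
Once coll holds the offending column names, you can drop them before modeling. A quick sketch (the name agg_df_pruned is just for illustration):

>>> # keep everything except the highly correlated columns
>>> agg_df_pruned = agg_df.drop(columns=coll)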
