Sometimes, rather than looking to create more features, we look for ways to consolidate them and reduce the dimensionality of our data. Dimensionality reduction shrinks the number of features we train our model on, which lowers the computational cost of training without sacrificing much performance. We could simply train on a subset of the features (feature selection); however, if we believe the features we would drop still carry some value, albeit small, we may instead look for ways to extract the information we need from them.
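One widely used technique for this kind of extraction is principal component analysis (PCA), which projects the original features onto a smaller set of components that capture as much of the variance as possible. A minimal sketch using scikit-learn (the random data and the choice of three components here are purely illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))  # 100 samples, 10 features

# Project the data onto the 3 directions of greatest variance.
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                # (100, 3)
print(pca.explained_variance_ratio_)  # share of variance kept by each component
```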
One common strategy is to discard features with low variance. These features aren't very informative, since they take roughly the same value throughout the data. Scikit-learn provides the `VarianceThreshold` transformer for this, which removes every feature whose variance falls below a given threshold.
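A minimal sketch (the toy data and the 0.01 threshold are arbitrary, chosen only so the near-constant column falls below it):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Toy data: the second column barely varies, so it carries little information.
X = np.array([
    [1.0, 0.0, 3.1],
    [2.0, 0.0, 2.9],
    [3.0, 0.1, 3.0],
    [4.0, 0.0, 3.2],
])

# Drop every feature whose variance falls below the threshold.
selector = VarianceThreshold(threshold=0.01)
X_reduced = selector.fit_transform(X)

print(selector.variances_)     # per-feature variances
print(selector.get_support())  # boolean mask of the retained features
print(X_reduced.shape)         # (4, 2): the near-constant column is gone
```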