Feature importances

Decision trees recursively split the data, deciding which feature to use at each split. They are greedy learners, meaning they choose the best split available at each step (for example, the one that most reduces impurity) rather than searching for the combination of splits that would yield the best overall tree. We can use a decision tree to gauge feature importances, which quantify how much each feature contributes to the splits at the decision nodes. These feature importances can help inform feature selection. Note that feature importances sum to 1, and higher values indicate more important features. Let's use a decision tree to see how red and white wine can be separated on a chemical level:

>>> from sklearn.tree import DecisionTreeClassifier
>>> dt = DecisionTreeClassifier(random_state=0).fit( ...
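
To make the truncated snippet above concrete, here is a minimal sketch of the full workflow; the w_X_train and w_y_train names (chemical features and red/white labels from an earlier train-test split) are assumptions for illustration, not necessarily the book's exact variables:

>>> import pandas as pd
>>> from sklearn.tree import DecisionTreeClassifier

>>> # assumed: w_X_train holds the chemical features and w_y_train the
>>> # red/white labels produced by an earlier train_test_split() call
>>> dt = DecisionTreeClassifier(random_state=0).fit(w_X_train, w_y_train)

>>> # pair each feature with its importance and sort in descending order;
>>> # the importances sum to 1, so each can be read as a proportion of
>>> # the tree's total impurity reduction attributable to that feature
>>> pd.Series(
...     dt.feature_importances_, index=w_X_train.columns
... ).sort_values(ascending=False)

The features at the top of this sorted series are the ones the tree relied on most to separate red from white wine, which is useful when deciding which columns to keep for later models.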
