When fitting a decision tree, the algorithm starts at the root node and greedily chooses, at every junction, the split that most improves a given measure of node purity. By default, scikit-learn optimizes the Gini impurity at each step. As each split is made, the model keeps track of how much that split contributes to the overall optimization goal. As a result, tree-based models that choose splits this way come with a built-in notion of feature importance.
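To make the purity measure concrete, here is a minimal sketch of the Gini impurity (the `gini_impurity` helper is ours for illustration, not part of scikit-learn): it is 1 minus the sum of squared class proportions in a node, so a pure node scores 0 and a perfectly mixed two-class node scores 0.5.

```python
import numpy as np

def gini_impurity(labels):
    """Illustrative helper: Gini impurity = 1 - sum of squared class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    proportions = counts / counts.sum()
    return 1.0 - np.sum(proportions ** 2)

# A pure node has impurity 0; a 50/50 two-class node has impurity 0.5
print(gini_impurity([0, 0, 0, 0]))  # 0.0
print(gini_impurity([0, 0, 1, 1]))  # 0.5
```

A split's importance contribution is, roughly, how much it reduces this impurity, weighted by how many samples reach the node.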
To illustrate this further, let's go ahead and fit a decision tree to our data and output the feature importances with the help of the following code:
from sklearn.tree import DecisionTreeClassifier

# create a brand new decision tree classifier
tree = DecisionTreeClassifier()
tree.fit(X, y)
Once our ...