Entropy is a measure of the heterogeneity (or impurity) of data. The more heterogeneous the data, that is, the more evenly mixed its classes are, the higher its entropy. Please keep in mind that, to make a better classification decision, homogeneous (low-entropy) data is better.
For example, consider a dataset where 1,000 people were surveyed about whether they smoke or not. In the first case, let's say that 500 people said yes and 500 said no. In the second case, let's assume that 800 people said yes and 200 said no. In which case would the entropy be higher?
Yes, you guessed right. It is the first one, because the data is more heterogeneous or, in other words, the answers are equally distributed. If a person had to guess whether a survey participant answered yes or no, without knowing the actual answers, they would do no better than a coin flip.
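As a quick sketch of this intuition, the Shannon entropy of each survey split can be computed directly; the `entropy` helper below is illustrative, not a function from any particular library:

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a distribution given as class counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

# Case 1: 500 yes / 500 no, an even split
print(entropy([500, 500]))            # 1.0 bit, the maximum for two classes

# Case 2: 800 yes / 200 no, a skewed split
print(round(entropy([800, 200]), 3))  # about 0.722 bits, lower entropy
```

The even 500/500 split reaches the maximum entropy of 1 bit for a binary outcome, while the skewed 800/200 split yields roughly 0.722 bits, confirming that the more evenly mixed case carries more entropy.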