In this chapter, we've seen the concepts of entropy and information gain, and we've learned to create a Decision Tree with them. After this, we've used Rattle to create a model to predict credit risk. We've translated our tree into rules and seen how to code them in Qlik Sense.
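To make the first idea concrete, here is a minimal sketch (in Python, for illustration only; the chapter itself used Rattle) of how entropy and information gain can be computed for a candidate split. The function names and the toy `"good"`/`"bad"` credit labels are assumptions for the example:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy, in bits, of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(parent, splits):
    """Entropy reduction achieved by splitting `parent` into `splits`."""
    total = len(parent)
    weighted = sum(len(s) / total * entropy(s) for s in splits)
    return entropy(parent) - weighted

# A split that perfectly separates the classes yields the maximum gain:
parent = ["good", "good", "bad", "bad"]
gain = information_gain(parent, [["good", "good"], ["bad", "bad"]])
print(gain)  # 1.0
```

A decision-tree learner evaluates this quantity for every candidate split and chooses the one with the highest gain, which is exactly how the tree decides which attribute to test at each node.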
After Decision Trees, we saw how ensemble models combine a set of learners to create a better model. We've focused on two ensemble models: Random Forest and Boosting.
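The combining step at the heart of these ensembles can be sketched very simply. The snippet below (a toy illustration, not the full Random Forest algorithm, which also uses bootstrap sampling and random feature selection) shows the majority-vote rule by which a classification ensemble aggregates its learners' predictions:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine several learners' class predictions by majority vote,
    the aggregation rule Random Forest uses for classification."""
    return Counter(predictions).most_common(1)[0][0]

# Three trees disagree on a case; the ensemble follows the majority:
print(majority_vote(["good", "bad", "good"]))  # good
```

Boosting differs in how the learners are built (each one focuses on the errors of its predecessors, and votes are weighted), but the final prediction is still an aggregation of many weak models.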
Then, we introduced Support Vector Machines, and finally, we covered other methods such as Regression and Neural Networks.
Throughout this chapter, we didn't worry about model performance; we just created the models. However, we avoided looking at the prediction accuracy of all these ...