In this last section, we compare several recipes to see whether our SQL-based feature engineering improves model performance. Throughout our experiments, one thing stood out about the recipes Amazon ML suggested: all the numeric variables ended up being categorized via quantile binning, and the large number of bins was also questionable. We compare the following scenarios on the Titanic dataset:
- Suggested Amazon ML recipe
- Numeric values are kept as numeric, with no quantile binning in the recipe
- The extended Titanic datasource we created in Chapter 4, Loading and Preparing the Dataset is used with the suggested Amazon ML recipe
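To make the binning question concrete, here is a minimal sketch of quantile binning using pandas' `qcut`. This is an illustration of the general technique, not Amazon ML's internal implementation; the fare values below are hypothetical stand-ins for a numeric Titanic column.

```python
import pandas as pd

# Hypothetical fare values standing in for a numeric Titanic feature.
fares = pd.Series([7.25, 7.9, 8.05, 10.5, 13.0,
                   26.55, 30.0, 71.28, 146.52, 512.33])

# Quantile binning: split the values into q equal-frequency buckets.
# Amazon ML's suggested recipes apply the same idea, often with many
# more bins than the 4 used here for illustration.
binned = pd.qcut(fares, q=4, labels=[f"bin_{i}" for i in range(4)])

# Each bucket receives roughly the same number of rows.
print(binned.value_counts().sort_index())
```

Turning a numeric column into such categorical buckets lets the model learn a separate weight per bin (capturing non-linear effects), at the cost of discarding the ordering and magnitude information that keeping the value numeric would preserve.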
We slightly ...