Appendix. Recommended Preprocessing

The type of preprocessing needed depends on the model being fit. For example, models that use distance functions or dot products should have all of their predictors on the same scale so that distance is measured appropriately.
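
As an illustration of that point, the sketch below pairs a K-nearest neighbors specification with a normalization step using the recipes and workflows packages. The choice of model, the built-in iris data, and the object names are assumptions made only to keep the example self-contained; they are not taken from Table A-1.

```r
library(tidymodels)

# A distance-based model: K-nearest neighbors (engine "kknn" by default).
knn_spec <- nearest_neighbor(neighbors = 5) %>%
  set_mode("classification")

# Center and scale every numeric predictor so that no single column
# dominates the distance calculation.
knn_rec <- recipe(Species ~ ., data = iris) %>%
  step_normalize(all_numeric_predictors())

# Bundle the preprocessing and the model so they are estimated together.
knn_wflow <- workflow() %>%
  add_recipe(knn_rec) %>%
  add_model(knn_spec)
```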

You can learn more about the models listed in Table A-1, and others that might be available, at the tidymodels website.

This Appendix provides recommendations for baseline levels of preprocessing that are needed for various model functions. In Table A-1, the preprocessing methods are categorized as:

Dummy

Do qualitative predictors require a numeric encoding (e.g., via dummy variables or other methods)?

ZV

Should columns with a single unique (i.e., zero variance) value be removed?

Impute

If some predictors are missing, should they be estimated via imputation?

Decorrelate

If there are correlated predictors, should this correlation be mitigated? This might mean filtering out predictors, using principal component analysis, or a model-based technique (e.g., regularization).

Normalize

Should predictors be centered and scaled?

Transform

Is it helpful to transform predictors to be more symmetric?

The information in Table A-1 is not exhaustive and depends somewhat on the implementation. For example, as noted in the table's footnotes, some models may not require a particular preprocessing operation, but the implementation may require it. In the table, ✓ indicates that the method is required for the model and × indicates that it is not.
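
To make the categories above concrete, here is a rough sketch of how each operation can be expressed as a recipe step. The ames data is used only so the example runs as written, and the ordering of the steps and the 0.9 correlation threshold are illustrative assumptions rather than recommendations from Table A-1.

```r
library(tidymodels)
data(ames)

rec <- recipe(Sale_Price ~ ., data = ames) %>%
  # Impute: estimate any missing numeric values with the column median
  step_impute_median(all_numeric_predictors()) %>%
  # Dummy: encode qualitative predictors as indicator columns
  step_dummy(all_nominal_predictors()) %>%
  # ZV: remove columns that contain a single unique value
  step_zv(all_predictors()) %>%
  # Transform: make skewed predictors more symmetric
  step_YeoJohnson(all_numeric_predictors()) %>%
  # Normalize: center and scale the predictors
  step_normalize(all_numeric_predictors()) %>%
  # Decorrelate: filter out highly correlated predictors
  step_corr(all_numeric_predictors(), threshold = 0.9)
```

Calling prep() on this recipe estimates the steps from the data, and bake() (or adding the recipe to a workflow) applies them; whether a given model actually needs each step is what Table A-1 summarizes.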
