Chapter 10. Training Pipelines
Model training is the broadest and deepest area of data science. We will cover the most important concepts and scalability challenges involved in training the full gamut of models, from gradient-boosted decision trees with XGBoost, to deep learning at scale with Ray, to fine-tuning LLMs with low-rank adaptation (LoRA). Many resources are available that go into further depth on these topics. Instead, we will focus on mastering the yin and yang of model training:
- Model-centric AI
The iterative process of improving model performance by experimenting with model architecture and tuning hyperparameters
- Data-centric AI
The iterative process of selecting features and data to improve model performance (a short sketch contrasting the two approaches follows this list)
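To make the distinction concrete, here is a minimal sketch using XGBoost and scikit-learn. The dataset, file path, label, and feature names (transactions.csv, fraud, amount, merchant_freq) are hypothetical: model-centric iteration holds the data fixed and varies the hyperparameters, while data-centric iteration holds the model fixed and varies the feature set.

```python
import pandas as pd
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: a CSV of transactions with a binary "fraud" label.
df = pd.read_csv("transactions.csv")
X, y = df.drop(columns=["fraud"]), df["fraud"]
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Model-centric iteration: hold the data fixed, vary the hyperparameters.
for max_depth in (4, 6, 8):
    model = xgb.XGBClassifier(max_depth=max_depth, n_estimators=200)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"model-centric: max_depth={max_depth} val_auc={auc:.3f}")

# Data-centric iteration: hold the model fixed, vary the feature set.
for features in (["amount"], ["amount", "merchant_freq"], list(X.columns)):
    model = xgb.XGBClassifier(max_depth=6, n_estimators=200)
    model.fit(X_train[features], y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val[features])[:, 1])
    print(f"data-centric: features={features} val_auc={auc:.3f}")
```

In practice, the two loops interleave: you pick a reasonable model configuration, then iterate on features and data, then revisit the hyperparameters once the data has stabilized.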
To become a great data scientist, you need to be good at both model-centric and data-centric training. In keeping with this yin and yang philosophy, we will cover the most important practical elements of training pipelines: the choice of learning algorithm, connecting labels to features in a feature store, feature selection, training dataset creation, model architecture, distributed training, and model evaluation. We will also look at the performance challenges of scaling model training on GPUs.
Unstructured Data and Labels in Feature Groups
In the MVPS development methodology from Chapter 2, you start by identifying the prediction problem and the data sources available to solve it. Prediction problems can be divided into three groups: supervised learning that requires explicit ...