Chapter 4. Data Validation
In Chapter 3, we discussed how we can ingest data from various sources into our pipeline. In this chapter, we now want to start consuming the data by validating it, as shown in Figure 4-1.
Figure 4-1. Data validation as part of ML pipelines
Data is the basis for every machine learning model, and the model’s usefulness and performance depend on the data used to train, validate, and analyze the model. As you can imagine, without robust data, we can’t build robust models. In colloquial terms, you might have heard the phrase “garbage in, garbage out”—meaning that our models won’t perform well if the underlying data isn’t curated and validated. This is exactly the purpose of the first workflow step in our machine learning pipeline: data validation.
In this chapter, we first motivate the idea of data validation, and then we introduce you to a Python package from the TensorFlow Extended ecosystem called TensorFlow Data Validation (TFDV). We show how you can set up the package in your data science projects, walk you through the common use cases, and highlight some very useful workflows.
The data validation step checks that the data in your pipelines is what your feature engineering step expects. It assists you in comparing multiple datasets. It also highlights whether your data changes over time, for example, if your training data is significantly different from the new ...
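To make these checks concrete before we turn to TFDV itself, here is a minimal plain-Python sketch of the two ideas just described: verifying that incoming records match what the downstream feature engineering step expects (column presence, types, value ranges), and flagging when new data has drifted away from the training data. The schema, column names, and drift threshold below are illustrative assumptions, not TFDV's API; TFDV performs far richer, statistics-based versions of these checks.

```python
# Conceptual sketch of a data validation step (NOT the TFDV API).
# The schema below is a hypothetical example: each column maps to
# (expected type, minimum allowed value, maximum allowed value).
EXPECTED_SCHEMA = {
    "age": (int, 0, 120),
    "income": (float, 0.0, 1e7),
}

def validate_records(records, schema=EXPECTED_SCHEMA):
    """Return a list of anomaly messages for records that break the schema."""
    anomalies = []
    for i, row in enumerate(records):
        for column, (col_type, lo, hi) in schema.items():
            if column not in row:
                anomalies.append(f"row {i}: missing column '{column}'")
                continue
            value = row[column]
            if not isinstance(value, col_type):
                anomalies.append(
                    f"row {i}: '{column}' has type {type(value).__name__}"
                )
            elif not (lo <= value <= hi):
                anomalies.append(
                    f"row {i}: '{column}'={value} outside [{lo}, {hi}]"
                )
    return anomalies

def detect_drift(train_values, new_values, threshold=0.25):
    """Flag drift if a feature's mean shifts by more than `threshold`
    relative to the training data (a deliberately crude heuristic)."""
    base = sum(train_values) / len(train_values)
    new = sum(new_values) / len(new_values)
    return abs(new - base) / abs(base) > threshold

# Example: the new data contains a type anomaly ("65" as a string)
# and a shifted age distribution relative to the training data.
train = [{"age": 34, "income": 52000.0}, {"age": 29, "income": 48000.0}]
new = [{"age": 71, "income": 49000.0}, {"age": "65", "income": 51000.0}]

print(validate_records(new))                 # reports the string-typed age
print(detect_drift([34, 29], [71]))          # age mean shifted noticeably
```

A production validation step would compare full distributions rather than means and would generate its schema from the training data instead of hand-writing it, which is precisely what TFDV automates.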