Data Quality Fundamentals

Book description

Do your product dashboards look funky? Are your quarterly reports stale? Is the data set you're using broken or just plain wrong? If you answered yes to any of these questions, this book is for you. These problems affect almost every team, yet they're usually addressed ad hoc and reactively.

Many data engineering teams today face the "good pipelines, bad data" problem. It doesn't matter how advanced your data infrastructure is if the data you're piping is bad. In this book, Barr Moses, Lior Gavish, and Molly Vorwerck, from the data observability company Monte Carlo, explain how to tackle data quality and trust at scale by leveraging best practices and technologies used by some of the world's most innovative companies.

  • Build more trustworthy and reliable data pipelines
  • Write scripts that run data checks and identify broken pipelines with data observability
  • Learn how to set and maintain data SLAs, SLIs, and SLOs
  • Develop and lead data quality initiatives at your company
  • Learn how to treat data services and systems with the diligence of production software
  • Automate data lineage graphs across your data ecosystem
  • Build anomaly detectors for your critical data assets
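To give a flavor of the kind of data check the book covers, here is a minimal freshness monitor: it flags a table as stale when its newest row is older than an agreed threshold (the basis of a freshness SLO). The function name, the SQLite stand-in for a warehouse, and the 24-hour threshold are illustrative assumptions, not code from the book.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def check_freshness(conn, table, ts_column, max_age):
    """Return True if the newest row in `table` is younger than `max_age`."""
    # Hypothetical helper: a real warehouse check would query Snowflake,
    # BigQuery, etc., but the logic is the same.
    row = conn.execute(f"SELECT MAX({ts_column}) FROM {table}").fetchone()
    latest = row[0]
    if latest is None:
        return False  # treat an empty table as stale
    latest_ts = datetime.fromisoformat(latest)
    return datetime.now(timezone.utc) - latest_ts <= max_age

# Demo against an in-memory SQLite table standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
now = datetime.now(timezone.utc)
conn.execute("INSERT INTO orders VALUES (1, ?)", (now.isoformat(),))
conn.execute("INSERT INTO orders VALUES (2, ?)",
             ((now - timedelta(days=2)).isoformat(),))

print(check_freshness(conn, "orders", "updated_at", timedelta(hours=24)))
```

A check like this would typically run on a schedule (e.g., via Airflow, covered in Chapter 3) and page the team when it returns `False`.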

Table of contents

  1. Preface
    1. Conventions Used in This Book
    2. Using Code Examples
    3. O’Reilly Online Learning
    4. How to Contact Us
    5. Acknowledgments
  2. 1. Why Data Quality Deserves Attention—Now
    1. What Is Data Quality?
    2. Framing the Current Moment
      1. Understanding the “Rise of Data Downtime”
      2. Other Industry Trends Contributing to the Current Moment
    3. Summary
  3. 2. Assembling the Building Blocks of a Reliable Data System
    1. Understanding the Difference Between Operational and Analytical Data
    2. What Makes Them Different?
    3. Data Warehouses Versus Data Lakes
      1. Data Warehouses: Table Types at the Schema Level
      2. Data Lakes: Manipulations at the File Level
      3. What About the Data Lakehouse?
      4. Syncing Data Between Warehouses and Lakes
    4. Collecting Data Quality Metrics
      1. What Are Data Quality Metrics?
      2. How to Pull Data Quality Metrics
      3. Using Query Logs to Understand Data Quality in the Warehouse
      4. Using Query Logs to Understand Data Quality in the Lake
    5. Designing a Data Catalog
    6. Building a Data Catalog
    7. Summary
  4. 3. Collecting, Cleaning, Transforming, and Testing Data
    1. Collecting Data
      1. Application Log Data
      2. API Responses
      3. Sensor Data
    2. Cleaning Data
    3. Batch Versus Stream Processing
    4. Data Quality for Stream Processing
    5. Normalizing Data
      1. Handling Heterogeneous Data Sources
      2. Schema Checking and Type Coercion
      3. Syntactic Versus Semantic Ambiguity in Data
      4. Managing Operational Data Transformations Across AWS Kinesis and Apache Kafka
    6. Running Analytical Data Transformations
      1. Ensuring Data Quality During ETL
      2. Ensuring Data Quality During Transformation
    7. Alerting and Testing
      1. dbt Unit Testing
      2. Great Expectations Unit Testing
      3. Deequ Unit Testing
    8. Managing Data Quality with Apache Airflow
      1. Scheduler SLAs
      2. Installing Circuit Breakers with Apache Airflow
      3. SQL Check Operators
    9. Summary
  5. 4. Monitoring and Anomaly Detection for Your Data Pipelines
    1. Knowing Your Known Unknowns and Unknown Unknowns
    2. Building an Anomaly Detection Algorithm
      1. Monitoring for Freshness
      2. Understanding Distribution
    3. Building Monitors for Schema and Lineage
      1. Anomaly Detection for Schema Changes and Lineage
      2. Visualizing Lineage
      3. Investigating a Data Anomaly
    4. Scaling Anomaly Detection with Python and Machine Learning
      1. Improving Data Monitoring Alerting with Machine Learning
      2. Accounting for False Positives and False Negatives
      3. Improving Precision and Recall
      4. Detecting Freshness Incidents with Data Monitoring
      5. F-Scores
      6. Does Model Accuracy Matter?
    5. Beyond the Surface: Other Useful Anomaly Detection Approaches
    6. Designing Data Quality Monitors for Warehouses Versus Lakes
    7. Summary
  6. 5. Architecting for Data Reliability
    1. Measuring and Maintaining High Data Reliability at Ingestion
    2. Measuring and Maintaining Data Quality in the Pipeline
    3. Understanding Data Quality Downstream
    4. Building Your Data Platform
      1. Data Ingestion
      2. Data Storage and Processing
      3. Data Transformation and Modeling
      4. Business Intelligence and Analytics
      5. Data Discovery and Governance
    5. Developing Trust in Your Data
      1. Data Observability
      2. Measuring the ROI on Data Quality
      3. How to Set SLAs, SLOs, and SLIs for Your Data
    6. Case Study: Blinkist
    7. Summary
  7. 6. Fixing Data Quality Issues at Scale
    1. Fixing Quality Issues in Software Development
    2. Data Incident Management
      1. Incident Detection
      2. Response
      3. Root Cause Analysis
      4. Resolution
      5. Blameless Postmortem
    3. Incident Response and Mitigation
      1. Establishing a Routine of Incident Management
      2. Why Data Incident Commanders Matter
    4. Case Study: Data Incident Management at PagerDuty
      1. The DataOps Landscape at PagerDuty
      2. Data Challenges at PagerDuty
      3. Using DevOps Best Practices to Scale Data Incident Management
    5. Summary
  8. 7. Building End-to-End Lineage
    1. Building End-to-End Field-Level Lineage for Modern Data Systems
      1. Basic Lineage Requirements
      2. Data Lineage Design
      3. Parsing the Data
      4. Building the User Interface
    2. Case Study: Architecting for Data Reliability at Fox
      1. Exercise “Controlled Freedom” When Dealing with Stakeholders
      2. Invest in a Decentralized Data Team
      3. Avoid Shiny New Toys in Favor of Problem-Solving Tech
      4. To Make Analytics Self-Serve, Invest in Data Trust
    3. Summary
  9. 8. Democratizing Data Quality
    1. Treating Your “Data” Like a Product
    2. Perspectives on Treating Data Like a Product
      1. Convoy Case Study: Data as a Service or Output
      2. Uber Case Study: The Rise of the Data Product Manager
      3. Applying the Data-as-a-Product Approach
    3. Building Trust in Your Data Platform
      1. Align Your Product’s Goals with the Goals of the Business
      2. Gain Feedback and Buy-in from the Right Stakeholders
      3. Prioritize Long-Term Growth and Sustainability Versus Short-Term Gains
      4. Sign Off on Baseline Metrics for Your Data and How You Measure Them
      5. Know When to Build Versus Buy
    4. Assigning Ownership for Data Quality
      1. Chief Data Officer
      2. Business Intelligence Analyst
      3. Analytics Engineer
      4. Data Scientist
      5. Data Governance Lead
      6. Data Engineer
      7. Data Product Manager
      8. Who Is Responsible for Data Reliability?
    5. Creating Accountability for Data Quality
    6. Balancing Data Accessibility with Trust
    7. Certifying Your Data
    8. Seven Steps to Implementing a Data Certification Program
    9. Case Study: Toast’s Journey to Finding the Right Structure for Their Data Team
      1. In the Beginning: When a Small Team Struggles to Meet Data Demands
      2. Supporting Hypergrowth as a Decentralized Data Operation
      3. Regrouping, Recentralizing, and Refocusing on Data Trust
      4. Considerations When Scaling Your Data Team
    10. Increasing Data Literacy
    11. Prioritizing Data Governance and Compliance
      1. Prioritizing a Data Catalog
      2. Beyond Catalogs: Enforcing Data Governance
    12. Building a Data Quality Strategy
      1. Make Leadership Accountable for Data Quality
      2. Set Data Quality KPIs
      3. Spearhead a Data Governance Program
      4. Automate Your Lineage and Data Governance Tooling
      5. Create a Communications Plan
    13. Summary
  10. 9. Data Quality in the Real World: Conversations and Case Studies
    1. Building a Data Mesh for Greater Data Quality
      1. Domain-Oriented Data Owners and Pipelines
      2. Self-Serve Functionality
      3. Interoperability and Standardization of Communications
    2. Why Implement a Data Mesh?
      1. To Mesh or Not to Mesh? That Is the Question
      2. Calculating Your Data Mesh Score
    3. A Conversation with Zhamak Dehghani: The Role of Data Quality Across the Data Mesh
      1. Can You Build a Data Mesh from a Single Solution?
      2. Is Data Mesh Another Word for Data Virtualization?
      3. Does Each Data Product Team Manage Their Own Separate Data Stores?
      4. Is a Self-Serve Data Platform the Same Thing as a Decentralized Data Mesh?
      5. Is the Data Mesh Right for All Data Teams?
      6. Does One Person on Your Team “Own” the Data Mesh?
      7. Does the Data Mesh Cause Friction Between Data Engineers and Data Analysts?
    4. Case Study: Kolibri Games’ Data Stack Journey
      1. First Data Needs
      2. Pursuing Performance Marketing
      3. 2018: Professionalize and Centralize
      4. Getting Data-Oriented
      5. Getting Data-Driven
      6. Building a Data Mesh
      7. Five Key Takeaways from a Five-Year Data Evolution
    5. Making Metadata Work for the Business
    6. Unlocking the Value of Metadata with Data Discovery
      1. Data Warehouse and Lake Considerations
      2. Data Catalogs Can Drown in a Data Lake—or Even a Data Mesh
      3. Moving from Traditional Data Catalogs to Modern Data Discovery
    7. Deciding When to Get Started with Data Quality at Your Company
      1. You’ve Recently Migrated to the Cloud
      2. Your Data Stack Is Scaling with More Data Sources, More Tables, and More Complexity
      3. Your Data Team Is Growing
      4. Your Team Is Spending at Least 30% of Their Time Firefighting Data Quality Issues
      5. Your Team Has More Data Consumers Than They Did One Year Ago
      6. Your Company Is Moving to a Self-Service Analytics Model
      7. Data Is a Key Part of the Customer Value Proposition
      8. Data Quality Starts with Trust
    8. Summary
  11. 10. Pioneering the Future of Reliable Data Systems
    1. Be Proactive, Not Reactive
    2. Predictions for the Future of Data Quality and Reliability
      1. Data Warehouses and Lakes Will Merge
      2. Emergence of New Roles on the Data Team
      3. Rise of Automation
      4. More Distributed Environments and the Rise of Data Domains
    3. So Where Do We Go from Here?
  12. Index
  13. About the Authors

Product information

  • Title: Data Quality Fundamentals
  • Author(s): Barr Moses, Lior Gavish, Molly Vorwerck
  • Release date: September 2022
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781098112042