Data Engineering with Python

Book description

Build, monitor, and manage real-time data pipelines to create data engineering infrastructure efficiently using open-source Apache projects

Key Features

  • Become well-versed in data architectures, data preparation, and data optimization skills with the help of practical examples
  • Design data models and learn how to extract, transform, and load (ETL) data using Python
  • Schedule, automate, and monitor complex data pipelines in production

Data engineering provides the foundation for data science and analytics, and forms an important part of all businesses. This book will help you explore the tools and methods used in the data engineering process with Python.

The book will show you how to tackle challenges commonly faced in different aspects of data engineering. You'll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines to work with large datasets. You'll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and production databases, and build data pipelines. Using real-world examples, you'll build architectures on which you'll learn how to deploy data pipelines.

By the end of this Python book, you'll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.

What you will learn

  • Understand how data engineering supports data science workflows
  • Discover how to extract data from files and databases and then clean, transform, and enrich it
  • Configure processors for handling different file formats as well as both relational and NoSQL databases
  • Find out how to implement a data pipeline and dashboard to visualize results
  • Use staging and validation to check data before it lands in the warehouse
  • Build real-time pipelines with staging areas that perform validation and handle failures
  • Get to grips with deploying pipelines in the production environment

Who this book is for

This book is for data analysts, ETL developers, and anyone looking to get started with, transition to, or refresh their knowledge of data engineering using Python. It will also be useful for students planning to build a career in data engineering and for IT professionals preparing for a transition. No previous knowledge of data engineering is required.

Table of contents

  1. Data Engineering with Python
  2. Why subscribe?
  3. Contributors
  4. About the author
  5. About the reviewers
  6. Packt is searching for authors like you
  7. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
    4. Download the example code files
    5. Download the color images
    6. Conventions used
    7. Get in touch
    8. Reviews
  8. Section 1: Building Data Pipelines – Extract, Transform, and Load
  9. Chapter 1: What is Data Engineering?
    1. What data engineers do
      1. Required skills and knowledge to be a data engineer
    2. Data engineering versus data science
    3. Data engineering tools
      1. Programming languages
      2. Databases
      3. Data processing engines
      4. Data pipelines
    4. Summary
  10. Chapter 2: Building Our Data Engineering Infrastructure
    1. Installing and configuring Apache NiFi
      1. A quick tour of NiFi
      2. PostgreSQL driver
    2. Installing and configuring Apache Airflow
    3. Installing and configuring Elasticsearch
    4. Installing and configuring Kibana
    5. Installing and configuring PostgreSQL
    6. Installing pgAdmin 4
      1. A tour of pgAdmin 4
    7. Summary
  11. Chapter 3: Reading and Writing Files
    1. Writing and reading files in Python
      1. Writing and reading CSVs
      2. Reading and writing CSVs using pandas DataFrames
      3. Writing JSON with Python
    2. Building data pipelines in Apache Airflow
    3. Handling files using NiFi processors
      1. Working with CSV in NiFi
      2. Working with JSON in NiFi
    4. Summary
  12. Chapter 4: Working with Databases
    1. Inserting and extracting relational data in Python
      1. Inserting data into PostgreSQL
    2. Inserting and extracting NoSQL database data in Python
      1. Installing Elasticsearch
      2. Inserting data into Elasticsearch
    3. Building data pipelines in Apache Airflow
      1. Setting up the Airflow boilerplate
      2. Running the DAG
    4. Handling databases with NiFi processors
      1. Extracting data from PostgreSQL
      2. Running the data pipeline
    5. Summary
  13. Chapter 5: Cleaning, Transforming, and Enriching Data
    1. Performing exploratory data analysis in Python
      1. Downloading the data
      2. Basic data exploration
    2. Handling common data issues using pandas
      1. Drop rows and columns
      2. Creating and modifying columns
      3. Enriching data
    3. Cleaning data using Airflow
    4. Summary
  14. Chapter 6: Building a 311 Data Pipeline
    1. Building the data pipeline
      1. Mapping a data type
      2. Triggering a pipeline
      3. Querying SeeClickFix
      4. Transforming the data for Elasticsearch
      5. Getting every page
      6. Backfilling data
    2. Building a Kibana dashboard
      1. Creating visualizations
      2. Creating a dashboard
    3. Summary
  15. Section 2: Deploying Data Pipelines in Production
  16. Chapter 7: Features of a Production Pipeline
    1. Staging and validating data
      1. Staging data
      2. Validating data with Great Expectations
    2. Building idempotent data pipelines
    3. Building atomic data pipelines
    4. Summary
  17. Chapter 8: Version Control with the NiFi Registry
    1. Installing and configuring the NiFi Registry
      1. Installing the NiFi Registry
      2. Configuring the NiFi Registry
    2. Using the Registry in NiFi
      1. Adding the Registry to NiFi
    3. Versioning your data pipelines
    4. Using git-persistence with the NiFi Registry
    5. Summary
  18. Chapter 9: Monitoring Data Pipelines
    1. Monitoring NiFi using the GUI
      1. Monitoring NiFi with the status bar
    2. Monitoring NiFi with processors
    3. Using Python with the NiFi REST API
    4. Summary
  19. Chapter 10: Deploying Data Pipelines
    1. Finalizing your data pipelines for production
      1. Backpressure
      2. Improving processor groups
    2. Using the NiFi variable registry
    3. Deploying your data pipelines
      1. Using the simplest strategy
      2. Using the middle strategy
      3. Using multiple registries
    4. Summary
  20. Chapter 11: Building a Production Data Pipeline
    1. Creating a test and production environment
      1. Creating the databases
      2. Populating a data lake
    2. Building a production data pipeline
      1. Reading the data lake
      2. Scanning the data lake
      3. Inserting the data into staging
      4. Querying the staging database
      5. Validating the staging data
      6. Insert Warehouse
    3. Deploying a data pipeline in production
    4. Summary
  21. Section 3: Beyond Batch – Building Real-Time Data Pipelines
  22. Chapter 12: Building a Kafka Cluster
    1. Creating ZooKeeper and Kafka clusters
      1. Downloading Kafka and setting up the environment
      2. Configuring ZooKeeper and Kafka
      3. Starting the ZooKeeper and Kafka clusters
    2. Testing the Kafka cluster
      1. Testing the cluster with messages
    3. Summary
  23. Chapter 13: Streaming Data with Apache Kafka
    1. Understanding logs
    2. Understanding how Kafka uses logs
      1. Topics
      2. Kafka producers and consumers
    3. Building data pipelines with Kafka and NiFi
      1. The Kafka producer
      2. The Kafka consumer
    4. Differentiating stream processing from batch processing
    5. Producing and consuming with Python
      1. Writing a Kafka producer in Python
      2. Writing a Kafka consumer in Python
    6. Summary
  24. Chapter 14: Data Processing with Apache Spark
    1. Installing and running Spark
    2. Installing and configuring PySpark
    3. Processing data with PySpark
      1. Spark for data engineering
    4. Summary
  25. Chapter 15: Real-Time Edge Data with MiNiFi, Kafka, and Spark
    1. Setting up MiNiFi
    2. Building a MiNiFi task in NiFi
    3. Summary
  26. Appendix
    1. Building a NiFi cluster
    2. The basics of NiFi clustering
    3. Building a NiFi cluster
    4. Building a distributed data pipeline
    5. Managing the distributed data pipeline
    6. Summary
  27. Other Books You May Enjoy
    1. Leave a review – let other readers know what you think

Product information

  • Title: Data Engineering with Python
  • Author(s): Paul Crickard
  • Release date: October 2020
  • Publisher(s): Packt Publishing
  • ISBN: 9781839214189