Data Engineering with Google Cloud Platform

Book description

Build and deploy your own data pipelines on GCP, make key architectural decisions, and gain the confidence to boost your career as a data engineer

Key Features

  • Understand data engineering concepts, the role of a data engineer, and the benefits of using GCP for building your solution
  • Learn how to use the various GCP products to ingest, consume, and transform data and orchestrate pipelines
  • Discover tips to prepare for and pass the Professional Data Engineer exam

Book Description

With this book, you'll understand how the highly scalable Google Cloud Platform (GCP) enables data engineers to create end-to-end data pipelines, from storing and processing data and orchestrating workflows to presenting data through visualization dashboards.

Starting with a quick overview of the fundamental concepts of data engineering, you'll learn the various responsibilities of a data engineer and how GCP plays a vital role in fulfilling those responsibilities. As you progress through the chapters, you'll be able to leverage GCP products to build a sample data warehouse using Cloud Storage and BigQuery and a data lake using Dataproc. The book gradually takes you through operations such as data ingestion, data cleansing, transformation, and integration with other data sources. You'll learn how to design IAM for data governance, deploy ML pipelines with Vertex AI, leverage pre-built GCP models as a service, and visualize data with Google Data Studio to build compelling reports. Finally, you'll find tips on how to boost your career as a data engineer, take the Professional Data Engineer certification exam, and get ready to become an expert in data engineering with GCP.

By the end of this data engineering book, you'll have developed the skills to perform core data engineering tasks and build efficient ETL data pipelines with GCP.

What you will learn

  • Load data into BigQuery and materialize its output for downstream consumption (see the sketch after this list)
  • Build data pipeline orchestration using Cloud Composer
  • Develop Airflow jobs to orchestrate and automate a data warehouse
  • Build a Hadoop data lake, create ephemeral clusters, and run jobs on the Dataproc cluster
  • Leverage Pub/Sub for messaging and ingestion in event-driven systems
  • Use Dataflow to perform ETL on streaming data
  • Unlock the power of your data with Data Studio
  • Estimate the GCP cost of your end-to-end data solutions
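
For instance, the first task above, loading data into BigQuery, takes only a few lines with the google-cloud-bigquery Python client. The snippet below is a minimal sketch, assuming a hypothetical project, dataset, and table ("my-project.my_dataset.my_table"), a local data.csv file, and application default credentials already configured:

    from google.cloud import bigquery

    # Hypothetical identifiers - replace with your own project, dataset, and table
    table_id = "my-project.my_dataset.my_table"

    client = bigquery.Client()

    # Skip the CSV header row and let BigQuery infer the schema
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    )

    with open("data.csv", "rb") as source_file:
        load_job = client.load_table_from_file(
            source_file, table_id, job_config=job_config
        )

    load_job.result()  # wait for the load job to complete
    print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")

Chapter 3 walks through the same kind of load interactively in the BigQuery console.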

Who this book is for

This book is for data engineers, data analysts, and anyone looking to design and manage data processing pipelines using GCP. You'll find this book useful if you are preparing to take Google's Professional Data Engineer exam. A beginner-level understanding of data science, the Python programming language, and Linux commands is necessary. A basic understanding of data processing and cloud computing in general will help you get the most out of this book.

Table of contents

  1. Data Engineering with Google Cloud Platform
  2. Contributors
  3. About the author
  4. About the reviewer
  5. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
    4. Download the example code files
    5. Download the color images
    6. Conventions used
    7. Get in touch
    8. Share Your Thoughts
  6. Section 1: Getting Started with Data Engineering with GCP
  7. Chapter 1: Fundamentals of Data Engineering
    1. Understanding the data life cycle
      1. Understanding the need for a data warehouse
    2. Knowing the roles of a data engineer before starting
      1. Data engineer versus data scientist
      2. The focus of data engineers
    3. Foundational concepts for data engineering
      1. ETL concept in data engineering 
      2. The difference between ETL and ELT
      3. What is NOT big data?
      4. A quick look at how big data technologies store data
      5. A quick look at how to process multiple files using MapReduce
    4. Summary
    5. Exercise
    6. See also
  8. Chapter 2: Big Data Capabilities on GCP
    1. Technical requirements
    2. Understanding what the cloud is
      1. The difference between the cloud and non-cloud era
      2. The on-demand nature of the cloud
    3. Getting started with Google Cloud Platform
      1. Introduction to the GCP console
      2. Practicing pinning services
      3. Creating your first GCP project
      4. Using GCP Cloud Shell
    4. A quick overview of GCP services for data engineering
      1. Understanding the GCP serverless service
      2. Service mapping and prioritization
      3. The concept of quotas on GCP services
      4. User account versus service account
    5. Summary
  9. Section 2: Building Solutions with GCP Components
  10. Chapter 3: Building a Data Warehouse in BigQuery
    1. Technical requirements
    2. Introduction to Google Cloud Storage and BigQuery
      1. BigQuery data location
    3. Introduction to the BigQuery console
      1. Creating a dataset in BigQuery using the console
      2. Loading a local CSV file into the BigQuery table
      3. Using public data in BigQuery
      4. Data types in BigQuery compared to other databases
      5. Timestamp data in BigQuery compared to other databases
    4. Preparing the prerequisites before developing our data warehouse
      1. Step 1: Access your Cloud Shell
      2. Step 2: Check the current setup using the command line 
      3. Step 3: The gcloud init command
      4. Step 4: Download example data from Git
      5. Step 5: Upload data to GCS from Git
    5. Practicing developing a data warehouse
      1. Data warehouse in BigQuery – Requirements for scenario 1
      2. Steps and planning for handling scenario 1
      3. Data warehouse in BigQuery – Requirements for scenario 2
      4. Steps and planning for handling scenario 2
    6. Summary
    7. Exercise – Scenario 3
    8. See also
  11. Chapter 4: Building Orchestration for Batch Data Loading Using Cloud Composer
    1. Technical requirements
    2. Introduction to Cloud Composer
    3. Understanding the working of Airflow
      1. Provisioning Cloud Composer in a GCP project
    4. Exercise: Build data pipeline orchestration using Cloud Composer
      1. Level 1 DAG – Creating dummy workflows
      2. Level 2 DAG – Scheduling a pipeline from Cloud SQL to GCS and BigQuery datasets
      3. Level 3 DAG – Parameterized variables
      4. Level 4 DAG – Guaranteeing task idempotency in Cloud Composer
      5. Level 5 DAG – Handling late data using a sensor
    5. Summary
  12. Chapter 5: Building a Data Lake Using Dataproc
    1. Technical requirements
    2. Introduction to Dataproc
      1. A brief history of the data lake and Hadoop ecosystem
      2. A deeper look into Hadoop components
      3. How much Hadoop-related knowledge do you need on GCP?
      4. Introducing the Spark RDD and the DataFrame concept
      5. Introducing the data lake concept
      6. Hadoop and Dataproc positioning on GCP
    3. Exercise – Building a data lake on a Dataproc cluster
      1. Creating a Dataproc cluster on GCP
      2. Using Cloud Storage as an underlying Dataproc file system 
    4. Exercise: Creating and running jobs on a Dataproc cluster
      1. Preparing log data in GCS and HDFS
      2. Developing Spark ETL from HDFS to HDFS
      3. Developing Spark ETL from GCS to GCS
      4. Developing Spark ETL from GCS to BigQuery
    5. Understanding the concept of the ephemeral cluster
      1. Practicing using a workflow template on Dataproc
    6. Building an ephemeral cluster using Dataproc and Cloud Composer
    7. Summary 
  13. Chapter 6: Processing Streaming Data with Pub/Sub and Dataflow
    1. Technical requirements
    2. Processing streaming data
      1. Streaming data for data engineers
      2. Introduction to Pub/Sub
      3. Introduction to Dataflow
    3. Exercise – Publishing event streams to Cloud Pub/Sub
      1. Creating a Pub/Sub topic
      2. Creating and running a Pub/Sub publisher using Python
      3. Creating a Pub/Sub subscription
    4. Exercise – Using Cloud Dataflow to stream data from Pub/Sub to GCS
      1. Creating a HelloWorld application using Apache Beam
      2. Creating a Dataflow streaming job without aggregation
      3. Creating a streaming job with aggregation
    5. Summary
  14. Chapter 7: Visualizing Data for Making Data-Driven Decisions with Data Studio
    1. Technical requirements
    2. Unlocking the power of your data with Data Studio
    3. From data to metrics in minutes with an illustrative use case
      1. Understanding what BigQuery INFORMATION_SCHEMA is
      2. Exercise – Exploring the BigQuery INFORMATION_SCHEMA table using Data Studio
      3. Exercise – Creating a Data Studio report using data from a bike-sharing data warehouse
    4. Understanding how Data Studio can impact the cost of BigQuery
      1. What kind of table could be 1 TB in size?
      2. How can a table be accessed 10,000 times in a month?
    5. Creating materialized views and understanding how BI Engine works
      1. Understanding BI Engine
    6. Summary
  15. Chapter 8: Building Machine Learning Solutions on Google Cloud Platform
    1. Technical requirements
    2. A quick look at machine learning
    3. Exercise – practicing ML code using Python
      1. Preparing the ML dataset by using a table from the BigQuery public dataset
      2. Training the ML model using Random Forest in Python
      3. Creating Batch Prediction using the training dataset's output
    4. The MLOps landscape in GCP
      1. Understanding the basic principles of MLOps
      2. Introducing GCP services related to MLOps
    5. Exercise – leveraging pre-built GCP models as a service 
      1. Uploading the image to a GCS bucket
      2. Creating a detect text function in Python
    6. Exercise – using GCP AutoML to train an ML model
    7. Exercise – deploying a dummy workflow with Vertex AI Pipeline
      1. Creating a dedicated regional GCS bucket
      2. Developing the pipeline on Python
      3. Monitoring the pipeline on the Vertex AI Pipeline console
    8. Exercise – deploying a scikit-learn model pipeline with Vertex AI
      1. Creating the first pipeline, which will result in an ML model file in GCS
      2. Running the first pipeline in Vertex AI Pipeline
      3. Creating the second pipeline, which will use the model file and store the prediction results as a CSV file in GCS
      4. Running the second pipeline in Vertex AI Pipeline
    9. Summary 
  16. Section 3: Key Strategies for Architecting Top-Notch Data Pipelines
  17. Chapter 9: User and Project Management in GCP 
    1. Technical requirements 
    2. Understanding IAM in GCP 
    3. Planning a GCP project structure
      1. Understanding the GCP organization, folder, and project hierarchy
      2. Deciding how many projects we should have in a GCP organization
    4. Controlling user access to our data warehouse
      1. Use-case scenario – planning a BigQuery ACL on an e-commerce organization
      2. Column-level security in BigQuery
    5. Practicing the concept of IaC using Terraform
      1. Exercise – creating and running basic Terraform scripts
      2. Self-exercise – managing a GCP project and resources using Terraform
    6. Summary
  18. Chapter 10: Cost Strategy in GCP
    1. Technical requirements
    2. Estimating the cost of your end-to-end data solution in GCP
      1. Comparing BigQuery on-demand and flat-rate 
      2. Example – estimating data engineering use case
    3. Tips for optimizing BigQuery using partitioned and clustered tables 
      1. Partitioned tables
      2. Clustered tables
      3. Exercise – optimizing BigQuery on-demand cost
    4. Summary
  19. Chapter 11: CI/CD on Google Cloud Platform for Data Engineers
    1. Technical requirements
    2. Introduction to CI/CD
      1. Understanding the data engineer's relationship with CI/CD practices
    3. Understanding CI/CD components with GCP services
    4. Exercise – implementing continuous integration using Cloud Build
      1. Creating a GitHub repository using Cloud Source Repository
      2. Developing the code and Cloud Build scripts
      3. Creating the Cloud Build Trigger
      4. Pushing the code to the GitHub repository
    5. Exercise – deploying Cloud Composer jobs using Cloud Build
      1. Preparing the CI/CD environment
      2. Preparing the cloudbuild.yaml configuration file
      3. Pushing the DAG to our GitHub repository
      4. Checking the CI/CD result in the GCS bucket and Cloud Composer
    6. Summary
    7. Further reading
  20. Chapter 12: Boosting Your Confidence as a Data Engineer
    1. Overviewing the Google Cloud certification
      1. Exam preparation tips
      2. Extra GCP services material
    2. Quiz – reviewing all the concepts you've learned about
      1. Questions
      2. Answers
    3. The past, present, and future of Data Engineering
    4. Boosting your confidence and final thoughts
    5. Summary
    6. Why subscribe?
  21. Other Books You May Enjoy
    1. Packt is searching for authors like you
    2. Share Your Thoughts

Product information

  • Title: Data Engineering with Google Cloud Platform
  • Author(s): Adi Wijaya
  • Release date: March 2022
  • Publisher(s): Packt Publishing
  • ISBN: 9781800561328