Book description
Get to grips with building and productionizing end-to-end big data solutions in Azure and learn best practices for working with large datasets
Key Features
- Integrate with Azure Synapse Analytics, Cosmos DB, and Azure HDInsight Kafka clusters to scale and analyze your projects and build pipelines
- Use Databricks SQL to run ad hoc queries on your data lake and create dashboards
- Productionize a solution using CI/CD to deploy notebooks and the Azure Databricks service to multiple environments
Book Description
Azure Databricks is a unified collaborative platform for performing scalable analytics in an interactive environment. The Azure Databricks Cookbook provides recipes to get hands-on with the analytics process, including ingesting data from various batch and streaming sources and building a modern data warehouse.
The book starts by teaching you how to create an Azure Databricks instance using the Azure portal, the Azure CLI, and ARM templates. You'll work with clusters in Databricks and explore recipes for ingesting data from sources including files, databases, and streaming sources such as Apache Kafka and Azure Event Hubs. The book will help you explore all the features supported by Azure Databricks for building powerful end-to-end data pipelines. You'll also find out how to build a modern data warehouse using Delta tables and Azure Synapse Analytics. Later, you'll learn how to write ad hoc queries and extract meaningful insights from the data lake by creating visualizations and dashboards with Databricks SQL. Finally, you'll deploy and productionize a data pipeline, as well as deploy notebooks and the Azure Databricks service, using continuous integration and continuous delivery (CI/CD).
By the end of this Azure book, you'll be able to use Azure Databricks to streamline different processes involved in building data-driven apps.
What you will learn
- Read and write data from and to various Azure resources and file formats
- Build a modern data warehouse with Delta Tables and Azure Synapse Analytics
- Explore jobs, stages, and tasks and see how Spark lazy evaluation works
- Handle concurrent transactions and learn performance optimization in Delta tables
- Learn Databricks SQL and use it to create real-time dashboards
- Integrate Azure DevOps for version control, deploying, and productionizing solutions with CI/CD pipelines
- Discover how to use RBAC and ACLs to restrict data access
- Build an end-to-end data processing pipeline for near real-time data analytics
Who this book is for
This recipe-based book is for data scientists, data engineers, big data professionals, and machine learning engineers who want to perform data analytics on their applications. Prior experience working with Apache Spark and Azure is necessary to get the most out of this book.
Table of contents
- Azure Databricks Cookbook
- Contributors
- About the authors
- About the reviewers
- Preface
- Chapter 1: Creating an Azure Databricks Service
- Technical requirements
- Creating a Databricks workspace in the Azure portal
- Creating a Databricks service using the Azure command-line interface (CLI)
- Creating a Databricks service using Azure Resource Manager (ARM) templates
- Adding users and groups to the workspace
- Creating a cluster from the user interface (UI)
- Getting started with notebooks and jobs in Azure Databricks
- Authenticating to Databricks using a personal access token (PAT)
- Chapter 2: Reading and Writing Data from and to Various Azure Services and File Formats
- Technical requirements
- Mounting ADLS Gen2 and Azure Blob storage to Azure DBFS
- Reading and writing data from and to Azure Blob storage
- Reading and writing data from and to ADLS Gen2
- Reading and writing data from and to an Azure SQL database using native connectors
- Reading and writing data from and to Azure Synapse SQL (dedicated SQL pool) using native connectors
- Reading and writing data from and to Azure Cosmos DB
- Reading and writing data from and to CSV and Parquet
- Reading and writing data from and to JSON, including nested JSON
- Chapter 3: Understanding Spark Query Execution
- Technical requirements
- Introduction to jobs, stages, and tasks
- Checking the execution details of all the executed Spark queries via the Spark UI
- Deep diving into schema inference
- Looking into the query execution plan
- How joins work in Spark
- Learning about input partitions
- Learning about output partitions
- Learning about shuffle partitions
- Storage benefits of different file types
- Chapter 4: Working with Streaming Data
- Chapter 5: Integrating with Azure Key Vault, App Configuration, and Log Analytics
- Technical requirements
- Creating an Azure Key Vault to store secrets using the UI
- Creating an Azure Key Vault to store secrets using ARM templates
- Using Azure Key Vault secrets in Azure Databricks
- Creating an App Configuration resource
- Using App Configuration in an Azure Databricks notebook
- Creating a Log Analytics workspace
- Integrating a Log Analytics workspace with Azure Databricks
- Chapter 6: Exploring Delta Lake in Azure Databricks
- Chapter 7: Implementing Near-Real-Time Analytics and Building a Modern Data Warehouse
- Technical requirements
- Understanding the scenario for an end-to-end (E2E) solution
- Creating required Azure resources for the E2E demonstration
- Simulating a workload for streaming data
- Processing streaming and batch data using Structured Streaming
- Understanding the various stages of transforming data
- Loading the transformed data into Azure Cosmos DB and a Synapse dedicated pool
- Creating a visualization and dashboard in a notebook for near-real-time analytics
- Creating a visualization in Power BI for near-real-time analytics
- Using Azure Data Factory (ADF) to orchestrate the E2E pipeline
- Chapter 8: Azure Databricks SQL Analytics
- Technical requirements
- How to create a user in SQL Analytics
- Creating SQL endpoints
- Granting access to objects to the user
- Running SQL queries in SQL Analytics
- Using query parameters and filters
- Introduction to visualizations in SQL Analytics
- Creating dashboards in SQL Analytics
- Connecting Power BI to SQL Analytics
- Chapter 9: DevOps Integrations and Implementing CI/CD for Azure Databricks
- Technical requirements
- How to integrate Azure DevOps with an Azure Databricks notebook
- Using GitHub for Azure Databricks notebook version control
- Understanding the CI/CD process for Azure Databricks
- How to set up an Azure DevOps pipeline for deploying notebooks
- Deploying notebooks to multiple environments
- Enabling CI/CD in an Azure DevOps build and release pipeline
- Deploying an Azure Databricks service using an Azure DevOps release pipeline
- Chapter 10: Understanding Security and Monitoring in Azure Databricks
- Technical requirements
- Understanding and creating RBAC in Azure for ADLS Gen2
- Creating ACLs using Storage Explorer and PowerShell
- How to configure credential passthrough
- How to restrict data access to users using RBAC
- How to restrict data access to users using ACLs
- Deploying Azure Databricks in a VNet and accessing a secure storage account
- Using Ganglia reports for cluster health
- Cluster access control
- Why subscribe?
- Other Books You May Enjoy
Product information
- Title: Azure Databricks Cookbook
- Author(s):
- Release date: September 2021
- Publisher(s): Packt Publishing
- ISBN: 9781789809718