April 2026
Intermediate to advanced
412 pages
10h 17m
English
Building and maintaining data pipelines traditionally requires significant manual effort—writing code to manage dependencies, implementing data quality checks, handling errors, and monitoring execution. As data systems grow in complexity, this imperative approach becomes increasingly difficult to maintain and scale. Databricks Lakeflow Spark Declarative Pipelines (SDP), formerly known as Delta Live Tables (DLT), takes a fundamentally different approach by allowing you to declare what you want your data to look like rather than specifying every step in producing it.
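To make the imperative-versus-declarative contrast concrete, here is a minimal toy sketch (not the real Databricks API; `table` and `run_pipeline` are hypothetical names invented for illustration) of the declarative idea: you register datasets and their dependencies, and a small framework resolves the execution order and applies quality rules for you.

```python
# Illustrative sketch only: a toy "declarative pipeline" registry showing
# the idea behind declarative pipelines. You declare datasets and their
# dependencies; the framework works out execution order. The names
# `table` and `run_pipeline` are hypothetical, not the Lakeflow SDP API.

_registry = {}

def table(deps=()):
    """Register a function as a declared dataset with named dependencies."""
    def decorator(fn):
        _registry[fn.__name__] = (fn, tuple(deps))
        return fn
    return decorator

def run_pipeline():
    """Resolve dependencies and materialize every declared dataset."""
    results, order = {}, []

    def materialize(name):
        if name in results:
            return results[name]
        fn, deps = _registry[name]
        results[name] = fn(*(materialize(d) for d in deps))
        order.append(name)
        return results[name]

    for name in _registry:
        materialize(name)
    return results, order

# Declare the pipeline: no manual ordering, wiring, or scheduling code.
@table()
def raw_orders():
    return [{"id": 1, "amount": 120}, {"id": 2, "amount": -5}]

@table(deps=["raw_orders"])
def clean_orders(rows):
    # A declarative quality rule: drop rows with non-positive amounts.
    return [r for r in rows if r["amount"] > 0]

results, order = run_pipeline()
print(order)
print(results["clean_orders"])
```

In the real product, the declarations are tables and views over your data, and the runtime additionally handles incremental processing, error handling, and monitoring, which this sketch omits.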
In this chapter, you will learn how to build automated, self-managing data pipelines using Lakeflow SDP. We will ...