Request a Demo
See how leading Data + AI teams boost productivity by 34%.
Specialization

Lakehouse Platform Engineering — Lakeflow

Production-grade Lakeflow (DLT) pipeline engineering from declarative fundamentals through CDC and streaming ingestion.

~30h · 20 modules · 1-2 years

Your Skill Path

20 modules · Masterclasses, hands-on scenarios & timed mock tests

1

Lakeflow Architecture — Declarative Thinking, Pipeline Types & Execution Model

DLT / Lakeflow Fundamentals · Masterclass
2

Build your first Lakeflow pipeline — define Bronze, Silver & Gold layers declaratively

DLT / Lakeflow Fundamentals · Scenario
3

Migrate an existing PySpark ETL notebook to a Lakeflow declarative pipeline

DLT / Lakeflow Fundamentals · Scenario
4

Configure pipeline modes — triggered vs continuous — for different SLA requirements

DLT / Lakeflow Fundamentals · Scenario
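The declarative thinking behind modules 1-4 can be sketched without the Lakeflow runtime: each medallion layer is a pure function over the layer below, and a runner composes them. This is a framework-free illustration, not the Lakeflow API; all names and sample data are hypothetical.

```python
# Framework-free sketch of declarative Bronze/Silver/Gold layering.
# Function and field names (bronze, silver, gold, run_pipeline, amount,
# status) are illustrative, not Lakeflow identifiers.

RAW = [
    {"id": 1, "amount": "120.5", "status": "ok"},
    {"id": 2, "amount": "bad", "status": "ok"},
    {"id": 3, "amount": "75.0", "status": "cancelled"},
]

def bronze(raw):
    # Bronze: land the source data as-is
    return list(raw)

def silver(bronze_rows):
    # Silver: cast types, dropping rows whose amount cannot be parsed
    out = []
    for r in bronze_rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except ValueError:
            pass
    return out

def gold(silver_rows):
    # Gold: business aggregate over non-cancelled orders
    ok = [r for r in silver_rows if r["status"] == "ok"]
    return {"order_count": len(ok), "revenue": sum(r["amount"] for r in ok)}

def run_pipeline(raw):
    # Declarative flavor: you state what each layer is, not how to schedule it
    return gold(silver(bronze(raw)))

print(run_pipeline(RAW))  # {'order_count': 1, 'revenue': 120.5}
```

In an actual Lakeflow pipeline each function would instead declare a table, and the engine, not your code, resolves dependencies and execution order.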
5

Data Quality with Expectations — Constraints, Warnings, Failures & Quarantine Patterns

Expectations & Data Quality · Masterclass
6

Define multi-layer expectations across Bronze, Silver and Gold in a single pipeline

Expectations & Data Quality · Scenario
7

A pipeline passes DQ checks despite null primary keys — debug broken expectation logic

Expectations & Data Quality · Scenario
8

Implement quarantine pattern to isolate invalid records without halting the pipeline

Expectations & Data Quality · Scenario
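The quarantine pattern in module 8 boils down to routing failing records aside instead of failing the run. A plain-Python sketch of that routing logic, with hypothetical expectation names and fields:

```python
# Sketch of the quarantine pattern: records failing any expectation are
# diverted to a quarantine set, and the pipeline keeps flowing.
# Expectation names and predicates are illustrative only.

EXPECTATIONS = {
    "valid_pk": lambda r: r.get("customer_id") is not None,
    "valid_email": lambda r: "@" in (r.get("email") or ""),
}

def apply_expectations(rows):
    valid, quarantined = [], []
    for r in rows:
        failures = [name for name, check in EXPECTATIONS.items() if not check(r)]
        if failures:
            # annotate the record with which expectations it violated
            quarantined.append({**r, "_failed": failures})
        else:
            valid.append(r)
    return valid, quarantined

rows = [
    {"customer_id": 1, "email": "a@x.com"},
    {"customer_id": None, "email": "b@x.com"},
    {"customer_id": 3, "email": "not-an-email"},
]
good, bad = apply_expectations(rows)
```

In Lakeflow itself, expectations are declared as table constraints; the quarantine table is typically a second table selecting the inverse of the same predicates.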
9

CDC with APPLY CHANGES INTO — SCD Type 1, SCD Type 2 & Out-of-order Event Handling

CDC & Apply Changes · Masterclass
10

Implement SCD Type 1 for customer dimension updates using APPLY CHANGES INTO

CDC & Apply Changes · Scenario
11

Implement SCD Type 2 to track full historical changes in product pricing

CDC & Apply Changes · Scenario
12

CDC pipeline processes out-of-order events and corrupts dimension history — diagnose and fix

CDC & Apply Changes · Scenario
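The SCD Type 2 and out-of-order concerns of modules 9-12 come down to ordering change events by a sequence column before versioning, which is what APPLY CHANGES INTO's SEQUENCE BY clause handles for you. A plain-Python sketch of that semantic (column names hypothetical):

```python
# Sketch of SCD Type 2 semantics: each change closes the previous version
# and opens a new current one. Sorting by the sequence column up front is
# what keeps out-of-order arrivals from corrupting history.
# Keys and column names (product_id, updated_at) are illustrative.

def scd2(events, key="product_id", seq="updated_at"):
    history = []   # versioned rows; __end is None for the current version
    current = {}   # key -> index of its current version in history
    for e in sorted(events, key=lambda e: e[seq]):  # like SEQUENCE BY
        k = e[key]
        if k in current:
            prev = history[current[k]]
            if prev[seq] >= e[seq]:
                continue  # stale duplicate, ignore
            prev["__end"] = e[seq]  # close the previous version
        history.append({**e, "__end": None})
        current[k] = len(history) - 1
    return history

events = [  # note: arrives out of order
    {"product_id": "p1", "price": 12, "updated_at": 3},
    {"product_id": "p1", "price": 10, "updated_at": 1},
    {"product_id": "p1", "price": 11, "updated_at": 2},
]
hist = scd2(events)
# three versions in sequence order; only the updated_at=3 row stays current
```

Without the sort, the price=12 event would be versioned first and the later-arriving but earlier-timestamped events would overwrite it, which is exactly the corrupted-history failure module 12 asks you to diagnose.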
13

Streaming Tables in Lakeflow — Auto Loader Integration, Triggers & Watermarking

Streaming Tables · Masterclass
14

Build a streaming ingestion pipeline from cloud storage into Lakeflow using Auto Loader

Streaming Tables · Scenario
15

A streaming pipeline produces duplicate records after checkpoint recovery — resolve idempotency

Streaming Tables · Scenario
16

Pipeline watermark is too aggressive causing late-arriving records to be silently dropped — tune it

Streaming Tables · Scenario
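The watermark-tuning failure in module 16 is easy to simulate: the watermark trails the maximum event time seen so far by a configured delay, and anything older is silently dropped. A plain-Python simulation with illustrative numbers:

```python
# Simulation of event-time watermarking: the watermark = max event time
# seen so far minus a delay. Events older than the watermark are dropped.
# Times and delays are illustrative.

def process(events, watermark_delay):
    max_event_time = 0
    accepted, dropped = [], []
    for t in events:
        max_event_time = max(max_event_time, t)
        if t >= max_event_time - watermark_delay:
            accepted.append(t)
        else:
            dropped.append(t)  # silently lost in a real pipeline
    return accepted, dropped

stream = [10, 11, 14, 9, 13]  # event times; 9 and 13 arrive late

# too-aggressive 2-unit delay: the record at t=9 is dropped
acc, dr = process(stream, watermark_delay=2)
# tuned 6-unit delay: every late record is kept
acc2, dr2 = process(stream, watermark_delay=6)
```

The trade-off the module explores is that a larger delay saves late records but forces the engine to hold more state before finalizing windows.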
17

Pipeline Observability — Event Logs, Data Lineage, Monitoring & Error Handling Patterns

Pipeline Observability · Masterclass
18

Debug a production pipeline failure using DLT event logs and the lineage graph

Pipeline Observability · Scenario
19

Set up alerting and SLA monitoring for a business-critical Lakeflow pipeline

Pipeline Observability · Scenario
20

A pipeline error message is generic — implement structured error handling patterns for production

Pipeline Observability · Scenario
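The structured error handling idea in module 20 is to replace a generic stack trace with a record that names the stage, the offending input, and the error class, so it can be logged or written to an errors table. A minimal sketch, with hypothetical field and stage names:

```python
# Sketch of structured error handling: failures become records with
# context (stage, row index, input, error type) instead of a bare
# exception. Field and stage names are illustrative.

def safe_transform(rows, fn, stage):
    results, errors = [], []
    for i, row in enumerate(rows):
        try:
            results.append(fn(row))
        except Exception as exc:
            errors.append({
                "stage": stage,
                "row_index": i,
                "input": row,
                "error_type": type(exc).__name__,
                "message": str(exc),
            })
    return results, errors

rows = [{"qty": "3"}, {"qty": None}]
out, errs = safe_transform(rows, lambda r: int(r["qty"]), "silver_cast")
```

A production pipeline would route `errors` to a monitoring sink; the point is that the failure record is self-describing rather than a generic message.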

Ready to get started?

Get a walkthrough of this skill path and see how Enqurious can accelerate your growth on Databricks.

Request a Demo