
Building Bronze Layer: Using COPY INTO in Databricks

Tags: Databricks, ETL, PySpark
By Sayli Nikumbh

The Challenge

John, a Junior Data Engineer at the GlobalMart Data Engineering team, started his day with a new task from his lead: "Build the bronze layer to store raw customer data in Delta tables."

Every day, new CSV files containing customer data land in a folder structure like this:

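/mnt/mskltestgbmart/copyintotest/
├── customers_01092025.csv
├── customers_02092025.csv
└── customers_03092025.csv

(The layout above is illustrative: the path and the daily customers_DDMMYYYY.csv naming pattern match the ones used in the proof of concept later in this post.)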

Since the bronze layer is meant to hold raw, untransformed data, John's responsibility was ensuring all incoming files were ingested as-is into a Customers Delta table. But as he started thinking about the task, two critical concerns came to mind:

Key Challenges

  1. Incremental Ingestion → How do we ensure that only new files are processed daily without reloading files already ingested?

  2. Schema Evolution → What if tomorrow's file arrives with additional columns? How can the pipeline automatically merge the new schema with the existing table without breaking?

John realized that while the task sounded simple, the solution needed to be scalable, reliable, and future-proof.

Why Traditional Approaches Fall Short

Excited but nervous, John searched the internet and found two common techniques: INSERT INTO and INSERT OVERWRITE. But both came with serious limitations:

INSERT INTO Problems

  • Simply appends data without preventing duplicates

  • If the same files are processed again, duplicates will accumulate

  • Requires manual tracking to avoid reloading old files

  • Inefficient for incremental loads

INSERT OVERWRITE Problems

  • Replaces the entire table every time

  • Causes unnecessary data loss or reprocessing

  • No flexibility to update only new records

  • Poor performance when working with large datasets

John realized that relying on either approach could cause major issues for GlobalMart's growing data volumes.
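To make the contrast concrete, here is a minimal sketch of the two patterns, assuming the raw CSVs are exposed through Databricks' read_files function and a hypothetical gbmart.bronze.customers table:

%sql
-- Pattern 1: INSERT INTO re-reads everything in the folder on every run,
-- so rows from files already loaded yesterday are appended again as duplicates.
INSERT INTO gbmart.bronze.customers
SELECT * FROM read_files('/mnt/mskltestgbmart/copyintotest', format => 'csv', header => true);

-- Pattern 2: INSERT OVERWRITE avoids duplicates only by replacing the entire
-- table on every run, reprocessing all historical files each time.
INSERT OVERWRITE gbmart.bronze.customers
SELECT * FROM read_files('/mnt/mskltestgbmart/copyintotest', format => 'csv', header => true);

Neither pattern knows which files it has already seen; that bookkeeping is exactly what John was missing.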

Discovering COPY INTO: The Game Changer

Still searching for a better approach, John discovered COPY INTO, a command built exactly for scenarios like his. It addressed his two key concerns: incremental ingestion and schema evolution.

Setting Up the Proof of Concept

To get started with his proof of concept, John prepared smaller chunks of the customer dataset and uploaded them to the data lake. The data lake now contained two initial files: customers_01092025.csv and customers_02092025.csv.
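To confirm what had landed, a quick listing of the folder does the job (shown here with the %fs notebook magic; dbutils.fs.ls from a Python cell is equivalent):

%fs ls /mnt/mskltestgbmart/copyintotest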

Before using COPY INTO, he wanted to validate the data by loading each CSV file individually and checking the record counts.

Initial File Validation

  • customers_01092025.csv: 5 records


  • customers_02092025.csv: 6 records

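One way to run this kind of spot check from a SQL cell is the read_files table-valued function (the per-file path below is an assumption about how the files were laid out):

%sql
SELECT count(*) AS record_count
FROM read_files(
  '/mnt/mskltestgbmart/copyintotest/customers_01092025.csv',
  format => 'csv',
  header => true
);
-- expected: 5 for customers_01092025.csv, 6 for customers_02092025.csv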

First COPY INTO Attempt

With the files verified, John was ready to test the ingestion. He used the COPY INTO command to load data from the raw zone into a Delta table in the bronze layer:

%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')

Command Breakdown
  • %sql → Runs the cell in SQL mode inside Databricks

  • COPY INTO gbmart.bronze.customerscopyintotest → Loads data into the Delta table named customerscopyintotest in the bronze schema of the gbmart database

  • FROM '/mnt/mskltestgbmart/copyintotest' → Points to the source folder in the data lake where all raw CSV files land

  • FILEFORMAT = CSV → Defines that input files are in CSV format

  • FORMAT_OPTIONS ('header'='true','inferSchema'='true'):

    • header = true → First row contains column names

    • inferSchema = true → Databricks automatically detects data types (string, int, date, etc.)

  • COPY_OPTIONS ('mergeSchema'='true') → Ensures schema evolution - if a new file arrives with extra columns, the Delta table automatically adds those columns instead of failing

In simple terms: This command tells Databricks to take all CSV files from the raw folder, read them with headers, automatically detect and merge schema changes, and load only new files into the bronze Delta table.

Resolving the Initial Error

When John first executed the COPY INTO command, it failed with a table-not-found error.

Why this happened: COPY INTO requires the destination Delta table to exist beforehand, either with a defined schema or as an empty Delta table.

The Fix

John created the table first:

%sql

CREATE TABLE IF NOT EXISTS gbmart.bronze.customerscopyintotest;

Then re-ran the COPY INTO command:

%sql

COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV 
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')

Result: Success! The output showed 11 records ingested (5 + 6), confirming that both initial files were loaded into the bronze table.


How COPY INTO Works: The Magic Behind the Scenes

Step 1: Directory Scanning

When John runs COPY INTO on a folder path, Databricks automatically scans the directory in the data lake (such as ADLS or S3) to discover all files matching the specified format (CSV in this case).
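If the landing folder ever contains mixed content, the scan can be narrowed with a glob using COPY INTO's PATTERN clause (a hedged variant of John's command):

%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
PATTERN = 'customers_*.csv'  -- only pick up files matching this glob
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')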

Step 2: File Tracking

COPY INTO maintains an internal record of which files have already been processed. This means if customers_01092025.csv was loaded yesterday, it won't be reloaded today, preventing duplicates.
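This bookkeeping is kept with the target table's metadata. One indirect way to observe it from SQL is the Delta table history, where each COPY INTO run appears as its own operation (exact columns vary by runtime version):

%sql
DESCRIBE HISTORY gbmart.bronze.customerscopyintotest
-- each COPY INTO run shows up as an operation; a run that finds
-- no new files writes no new rows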

Step 3: Incremental Load

When a new file like customers_03092025.csv lands in the same folder, COPY INTO automatically picks up only that file and appends it to the Delta table.

Step 4: Schema Evolution

If tomorrow's file includes additional columns (like consent_timestamp), the command seamlessly merges the new schema into the Delta table without breaking the pipeline.

Testing Incremental Ingestion

After successfully ingesting the first two files, John uploaded a new file (customers_03092025.csv) to verify incremental behavior.


New File Validation

  • customers_03092025.csv: 5 records


He then re-ran the same COPY INTO command.


Results:

  • Previously ingested files were skipped

  • Only the new file with 5 records was picked up

The table total rose to 16 records (11 + 5).

This confirmed that COPY INTO uses directory listing to scan the folder, identify new files, and load only those while ignoring previously processed files.
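A cheap sanity check of this behavior: re-run the identical command once more with no new files present; it should report zero affected rows in its result summary.

%sql
-- With no new files in the folder, this run is a no-op:
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')
-- expected output: num_affected_rows = 0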

Testing Schema Evolution

Next, John tested a real-world scenario: schema evolution. In practice, incoming files might have additional columns. To simulate this, he added a fourth file (customers_04092025.csv) with additional columns to test how COPY INTO handles schema changes.


The data lake now contained four files, with the latest one having an expanded schema:

Fourth File Validation

  • customers_04092025.csv: 5 records


  • New Schema (customers_04092025.csv)

The file contained 5 records carrying the original eight fields plus four new consent-related columns:


CustomerID: string
FirstName: string
LastName: string
Email: string
PhoneNumber: double
DateOfBirth: date
RegistrationDate: date
PreferredPaymentMethodID: string
third_party_data_sharing: string
marketing_communication: string
cookies_tracking: string
consent_timestamp: string
  • Previous Schema

The original files only had:

CustomerID: string
FirstName: string
LastName: string
Email: string
PhoneNumber: double
DateOfBirth: date
RegistrationDate: date
PreferredPaymentMethodID: string

Schema Evolution in Action

When John ingested the fourth file using the same command, with 'mergeSchema'='true' set in both FORMAT_OPTIONS and COPY_OPTIONS, the new columns were successfully merged into the Delta table without errors.

  • FORMAT_OPTIONS ('mergeSchema'='true') handles schema conflicts during file reading

  • COPY_OPTIONS ('mergeSchema'='true') handles schema evolution during table writing
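Putting both flags together, the command John re-ran looks like this (identical to the earlier one, with the read-side mergeSchema shown explicitly):

%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header'='true','inferSchema'='true','mergeSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')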


Final Result: The table now contained 21 rows with the evolved schema, demonstrating COPY INTO's ability to handle schema changes automatically.

Key Insights and Best Practices

By the end of his proof of concept, John learned important lessons about COPY INTO:

1. Idempotency at File Level, Not Record Level

  • COPY INTO tracks file names, not the data inside files

  • If the same file is reprocessed, it will be skipped - even if the contents have changed

  • This is acceptable for the bronze layer since its purpose is to preserve raw files exactly as they arrive

2. File Name Uniqueness Matters

  • If a file with the same name but different content is placed in the folder, the newer version will be ignored

  • To ensure all data is captured, each file should have a unique name when landing in the data lake

  • Consider using timestamps or sequence numbers in file names (for a deliberate re-load of a same-named file, see the force option sketched below)
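For the rare case where a same-named file genuinely must be reloaded, COPY INTO offers a force copy option. Note that it bypasses the file-level bookkeeping, so any rows already loaded from that file will be appended again (a hedged sketch):

%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true','force'='true')  -- reprocess files even if already loaded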

3. Performance Benefits

  • Only processes new files, reducing compute costs

  • No need for complex file tracking mechanisms

  • Built-in deduplication at the file level

4. Error Handling

  • Always create the target table before running COPY INTO

  • Use mergeSchema=true for dynamic schema handling

  • Monitor for corrupt or malformed files that might cause failures

Production Recommendations

Based on John's findings, here are recommendations for production implementation:

  1. File Naming Convention: Use patterns like tablename_YYYYMMDD_HHMMSS.csv to ensure uniqueness

  2. Monitoring: Set up alerts for failed COPY INTO operations

  3. Schema Validation: Consider adding data quality checks after ingestion

  4. Partitioning: For large datasets, consider partitioning the Delta table by date or other relevant columns (a sketch follows this list)

  5. Retention Policy: Implement lifecycle management for both raw files and Delta table versions
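A sketch of what recommendations 1 and 4 could look like together, assuming a hypothetical customers_partitioned table and an ingest_date column derived at load time (COPY INTO accepts a SELECT over the source path for this kind of light reshaping):

%sql
-- Explicitly defined, date-partitioned bronze table (column names
-- borrowed from the customer schema above; ingest_date is hypothetical):
CREATE TABLE IF NOT EXISTS gbmart.bronze.customers_partitioned (
  CustomerID STRING,
  FirstName STRING,
  LastName STRING,
  Email STRING,
  ingest_date DATE
)
PARTITIONED BY (ingest_date);

COPY INTO gbmart.bronze.customers_partitioned
FROM (
  SELECT CustomerID, FirstName, LastName, Email,
         current_date() AS ingest_date  -- partition column added at load time
  FROM '/mnt/mskltestgbmart/copyintotest'
)
FILEFORMAT = CSV
FORMAT_OPTIONS ('header'='true','inferSchema'='true')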

Conclusion

COPY INTO proved to be the ideal solution for John's bronze layer requirements, providing:

  • Automatic incremental ingestion without duplicates

  • Schema evolution support for changing data structures

  • Simplified operations with minimal maintenance overhead

  • Cost efficiency by processing only new files

This approach enables GlobalMart to build a robust, scalable data lake foundation that can adapt to changing business requirements while maintaining data integrity and operational efficiency.

