Building the Bronze Layer: Using COPY INTO in Databricks

The Challenge
John, a Junior Data Engineer on the GlobalMart Data Engineering team, started his day with a new task from his lead: "Build the bronze layer to store raw customer data in Delta tables."
Every day, new CSV files containing customer data land in a folder in the data lake, roughly like this (an illustrative layout based on the folder and file names used later in this walkthrough):
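/mnt/mskltestgbmart/copyintotest/
├── customers_01092025.csv
├── customers_02092025.csv
├── customers_03092025.csv
└── ...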
Since the bronze layer is meant to hold raw, untransformed data, John's responsibility was ensuring all incoming files were ingested as-is into a Customers Delta table. But as he started thinking about the task, two critical concerns came to mind:
Key Challenges
Incremental Ingestion → How do we ensure that only new files are processed daily without reloading files already ingested?
Schema Evolution → What if tomorrow's file arrives with additional columns? How can the pipeline automatically merge the new schema with the existing table without breaking?
John realized that while the task sounded simple, the solution needed to be scalable, reliable, and future-proof.
Why Traditional Approaches Fall Short
Excited but nervous, John searched the internet and found two common techniques: INSERT INTO and INSERT OVERWRITE. But both came with serious limitations:
INSERT INTO Problems
Simply appends data without preventing duplicates
If the same files are processed again, duplicates will accumulate
Requires manual tracking to avoid reloading old files
Inefficient for incremental loads
INSERT OVERWRITE Problems
Replaces the entire table every time
Causes unnecessary data loss or reprocessing
No flexibility to update only new records
Poor performance when working with large datasets
John realized that relying on either approach could cause major issues for GlobalMart's growing data volumes.
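For reference, the two approaches John ruled out would look roughly like this. This is only a sketch, not his code: the table and path names are reused from his later COPY INTO experiment, and read_files (available in newer Databricks runtimes) is one way to read raw CSVs in SQL.
%sql
-- Appends everything it reads, including files already loaded on a previous run
INSERT INTO gbmart.bronze.customerscopyintotest
SELECT * FROM read_files('/mnt/mskltestgbmart/copyintotest', format => 'csv', header => true);

-- Replaces the whole table on every run, reprocessing all historical files
INSERT OVERWRITE gbmart.bronze.customerscopyintotest
SELECT * FROM read_files('/mnt/mskltestgbmart/copyintotest', format => 'csv', header => true);
Neither statement knows which files it has already seen, which is exactly the gap John needed to close.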
Discovering COPY INTO: The Game Changer
Still searching for a better approach, John discovered COPY INTO, a command built exactly for scenarios like his. It addressed his two key concerns: incremental ingestion and schema evolution.
Setting Up the Proof of Concept
To get started with his proof of concept, John prepared smaller chunks of the customer dataset and uploaded them to the data lake. The data lake now contained two initial files: customers_01092025.csv and customers_02092025.csv.
Before using COPY INTO, he wanted to validate the data by loading each CSV file individually and checking the record counts.
Initial File Validation
customers_01092025.csv: 5 records
customers_02092025.csv: 6 records
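One way to get per-file counts like these in a notebook (a sketch; the path comes from the landing folder above, and read_files with the _metadata column is assumed to be available on the runtime in use):
%sql
-- Count records in each landing CSV, grouped by source file name
SELECT _metadata.file_name AS source_file, COUNT(*) AS record_count
FROM read_files('/mnt/mskltestgbmart/copyintotest', format => 'csv', header => true)
GROUP BY _metadata.file_name;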
First COPY INTO Attempt
With the files verified, John was ready to test the ingestion. He used the COPY INTO command to load data from the raw zone into a Delta table in the bronze layer:
%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')
Command Breakdown
%sql → Runs the cell in SQL mode inside Databricks
COPY INTO gbmart.bronze.customerscopyintotest → Loads data into the Delta table named customerscopyintotest in the bronze schema of the gbmart database
FROM '/mnt/mskltestgbmart/copyintotest' → Points to the source folder in the data lake where all raw CSV files land
FILEFORMAT = CSV → Defines that input files are in CSV format
FORMAT_OPTIONS ('header'='true','inferSchema'='true'):
header = true → First row contains column names
inferSchema = true → Databricks automatically detects data types (string, int, date, etc.)
COPY_OPTIONS ('mergeSchema'='true') → Ensures schema evolution - if a new file arrives with extra columns, the Delta table automatically adds those columns instead of failing
In simple terms: This command tells Databricks to take all CSV files from the raw folder, read them with headers, automatically detect and merge schema changes, and load only new files into the bronze Delta table.
Resolving the Initial Error
When John first executed the COPY INTO command, it failed with an error.
Why this happened: COPY INTO requires the destination Delta table to exist beforehand - either with a defined schema or as an empty Delta table.
The Fix
John created the table first:
%sql
CREATE TABLE IF NOT EXISTS gbmart.bronze.customerscopyintotest;
Then re-ran the COPY INTO command:
%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')
Result: Success! The output showed 11 records ingested (5 + 6), confirming that both initial files were loaded into the bronze table.
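A quick sanity check on the target table confirms the total (a sketch):
%sql
-- Should return 11 after the first two files are ingested
SELECT COUNT(*) AS total_records
FROM gbmart.bronze.customerscopyintotest;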
How COPY INTO Works: The Magic Behind the Scenes
Step 1: Directory Scanning
When John runs COPY INTO on a folder path, Databricks automatically scans the directory in the data lake (such as ADLS or S3) to discover all files matching the specified format (CSV in this case).
Step 2: File Tracking
COPY INTO maintains an internal record of which files have already been processed. This means if customers_01092025.csv was loaded yesterday, it won't be reloaded today, preventing duplicates.
Step 3: Incremental Load
When new files like customers_02092025.csv or customers_03092025.csv land in the same folder, COPY INTO automatically picks up only those new files and appends them to the Delta table.
Step 4: Schema Evolution
If tomorrow's file includes additional columns (like phone_number), the command seamlessly merges that new schema into the Delta table without breaking the pipeline.
Testing Incremental Ingestion
After successfully ingesting the first two files, John uploaded a new file (customers_03092025.csv) to verify incremental behavior.
New File Validation
customers_03092025.csv: 5 records
When he re-ran the same COPY INTO command:
Results:
Previously ingested files were skipped
Only the new file with 5 records was picked up
This confirmed that COPY INTO uses directory listing to scan the folder, identify new files, and load only those while ignoring previously processed files.
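One way John could verify this file-level behavior is from the table's Delta transaction history, where each COPY INTO run is logged with its operation metrics (a sketch):
%sql
-- Each COPY INTO run appears as a separate operation;
-- operationMetrics shows how many rows that run actually appended
DESCRIBE HISTORY gbmart.bronze.customerscopyintotest;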
Testing Schema Evolution
Next, John tested a real-world scenario: schema evolution. In practice, incoming files can arrive with additional columns, so he added a fourth file (customers_04092025.csv) with an expanded schema to see how COPY INTO handles the change.
The data lake now contained four files, with the latest one having an expanded schema:
Fourth File Validation
customers_04092025.csv: 5 records
New Schema (customers_04092025.csv)
The file contained 5 records with the following fields; the last four are new:
CustomerID: string
FirstName: string
LastName: string
Email: string
PhoneNumber: double
DateOfBirth: date
RegistrationDate: date
PreferredPaymentMethodID: string
third_party_data_sharing: string
marketing_communication: string
cookies_tracking: string
consent_timestamp: string
Previous Schema
The original files only had:
CustomerID: string
FirstName: string
LastName: string
Email: string
PhoneNumber: double
DateOfBirth: date
RegistrationDate: date
PreferredPaymentMethodID: string
Schema Evolution in Action
When John ingested the fourth file using the same command (with 'mergeSchema'='true' in COPY_OPTIONS), the new columns were successfully merged into the Delta table without errors.
FORMAT_OPTIONS ('mergeSchema'='true') handles schema conflicts during file reading
COPY_OPTIONS ('mergeSchema'='true') handles schema evolution during table writing
Final Result: The table now contained 21 rows with the evolved schema, demonstrating COPY INTO's ability to handle schema changes automatically.
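For completeness, here is what the command looks like with mergeSchema set in both places. This is a sketch; John's original command set it only in COPY_OPTIONS.
%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
-- mergeSchema here reconciles differing schemas across the source CSVs while reading
FORMAT_OPTIONS ('header'='true','inferSchema'='true','mergeSchema'='true')
-- mergeSchema here lets the target Delta table evolve to include the new columns
COPY_OPTIONS ('mergeSchema'='true')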
Key Insights and Best Practices
By the end of his proof of concept, John learned important lessons about COPY INTO:
1. Idempotency at File Level, Not Record Level
COPY INTO tracks file names, not the data inside files
If the same file is reprocessed, it will be skipped - even if the contents have changed
This is acceptable for the bronze layer since its purpose is to preserve raw files exactly as they arrive
2. File Name Uniqueness Matters
If a file with the same name but different content is placed in the folder, the newer version will be ignored
To ensure all data is captured, each file should have a unique name when landing in the data lake
Consider using timestamps or sequence numbers in file names
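If a corrected file ever does need to be reloaded under an existing name, COPY INTO's force copy option overrides the file tracking. A sketch, to be used carefully because it re-appends files that were already loaded:
%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
-- Limit the reload to the corrected file
FILES = ('customers_01092025.csv')
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
-- force = true reloads files regardless of whether they were ingested before
COPY_OPTIONS ('mergeSchema'='true','force'='true')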
3. Performance Benefits
Only processes new files, reducing compute costs
No need for complex file tracking mechanisms
Built-in deduplication at the file level
4. Error Handling
Always create the target table before running COPY INTO
Use mergeSchema=true for dynamic schema handling
Monitor for corrupt or malformed files that might cause failures
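For the last point, COPY INTO also offers a dry-run validation mode that checks the source data without committing anything to the table (a sketch):
%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
-- VALIDATE ALL parses and checks every row but does not load it
VALIDATE ALL
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')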
Production Recommendations
Based on John's findings, here are recommendations for production implementation:
File Naming Convention: Use patterns like tablename_YYYYMMDD_HHMMSS.csv to ensure uniqueness (see the PATTERN sketch after this list)
Monitoring: Set up alerts for failed COPY INTO operations
Schema Validation: Consider adding data quality checks after ingestion
Partitioning: For large datasets, consider partitioning the Delta table by date or other relevant columns
Retention Policy: Implement lifecycle management for both raw files and Delta table versions
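Building on the file-naming recommendation, the PATTERN clause can restrict ingestion to files that follow the agreed convention (a sketch with an assumed glob):
%sql
COPY INTO gbmart.bronze.customerscopyintotest
FROM '/mnt/mskltestgbmart/copyintotest'
FILEFORMAT = CSV
-- Only ingest files matching the naming convention
PATTERN = 'customers_*.csv'
FORMAT_OPTIONS ('header'='true','inferSchema'='true')
COPY_OPTIONS ('mergeSchema'='true')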
Conclusion
COPY INTO proved to be the ideal solution for John's bronze layer requirements, providing:
Automatic incremental ingestion without duplicates
Schema evolution support for changing data structures
Simplified operations with minimal maintenance overhead
Cost efficiency by processing only new files
This approach enables GlobalMart to build a robust, scalable data lake foundation that can adapt to changing business requirements while maintaining data integrity and operational efficiency.