The Snowflake Feature That Defies Common Sense

Recently, I mentored a group of engineering graduates who were learning Snowflake and preparing for the SnowPro Core Certification. During a review of sample certification questions, we came across this statement:
"To optimize data loading, Snowflake recommends file sizes to be between 100-250 MB."
One of the mentees immediately asked, “Why does Snowflake recommend files of 100–250 MB? What is significant about that range?”
I explained that instead of loading a single large file, it is more efficient to split the data into multiple smaller files. This approach often results in faster load times.
The room fell silent for a moment.
Then came a thoughtful follow-up question:
“Doesn’t that create extra work? We would be loading several files instead of just one. How can that be faster?”
It’s a logical question; on the face of it, one load operation sounds like less work than eight.
But is that how Snowflake actually operates?
When I first began working with Snowflake, I had the same doubts. Over time, I learned how Snowflake’s architecture and parallel processing capabilities make this recommendation both practical and efficient.
Still not sure what I mean?
Let’s walk through a simple experiment to see it in action.
Setting Up the Workspace
Create Database and Schema
CREATE DATABASE IF NOT EXISTS DATA_LOADING;
CREATE SCHEMA IF NOT EXISTS TEST;
USE SCHEMA DATA_LOADING.TEST;
Set Up the Warehouse
For this experiment, we will use the default warehouse COMPUTE_WH provided by Snowflake.
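If your session is not already pointed at that warehouse, the statements below set it up. This is a minimal sketch; the explicit size and auto-suspend values are assumptions rather than part of the original setup.
-- Switch the session to the default warehouse
USE WAREHOUSE COMPUTE_WH;

-- Optional: pin the size so both loads run on identical compute
-- (the size and auto-suspend values here are assumptions)
ALTER WAREHOUSE COMPUTE_WH SET
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60;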
Create a File Format
CREATE OR REPLACE FILE FORMAT csv_format
  TYPE = 'CSV'
  FIELD_DELIMITER = ','
  SKIP_HEADER = 1;
Create an External Stage
All the required data is stored in an Azure storage container.
Create an external stage to reference these files:
CREATE OR REPLACE STAGE adls_ext_stage
  STORAGE_INTEGRATION = adls_to_sf_integration
  URL = 'azure://sfcoredata.blob.core.windows.net/sfadvdata'
  FILE_FORMAT = csv_format;
Verify files in the Stage
-- List all files in the stage
LIST @adls_ext_stage;
The listing shows the dataset in two forms: a single large file (loans.csv) and the same data divided into eight smaller files.
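To confirm the files actually fall in the recommended 100–250 MB range, you can query the LIST output directly. This is a sketch that assumes LIST @adls_ext_stage was the last statement run in the session.
-- Inspect file sizes (in MB) from the most recent LIST output
SELECT "name",
       ROUND("size" / 1024 / 1024, 1) AS size_mb
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
ORDER BY size_mb DESC;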
Count Total Rows
SELECT COUNT(*) AS total_rows_in_file
FROM @adls_ext_stage/loans.csv;
Make a note of this number; we will use it later to verify our loading accuracy.
Approach 1: Loading the Single Large File
Create a Target Table
CREATE OR REPLACE TABLE loans_single_file (
  customer_id STRING,
  current_loan_amount INTEGER,
  term STRING,
  years_in_current_job STRING,
  home_ownership STRING,
  purpose STRING,
  monthly_debt DECIMAL(8,2),
  loan_id STRING
);
Load Data from the External Stage
COPY INTO loans_single_file
FROM @adls_ext_stage
FILES = ('loans.csv');
Results:
Total rows loaded: 25,000,000 (25 Million)
Total time taken: 45 seconds
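If you want Snowflake’s own timing rather than the worksheet timer, the QUERY_HISTORY table function can confirm the elapsed time. This is a sketch; the ILIKE filter and the one-hour window are assumptions.
-- Look up the COPY statement's elapsed time in seconds
SELECT query_text,
       total_elapsed_time / 1000 AS elapsed_seconds
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
       END_TIME_RANGE_START => DATEADD('hour', -1, CURRENT_TIMESTAMP())))
WHERE query_text ILIKE 'COPY INTO loans_single_file%'
ORDER BY start_time DESC
LIMIT 1;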
Approach 2: Loading Multiple Files
Create a Target Table
CREATE OR REPLACE TABLE loans_multiple_file (
  customer_id STRING,
  current_loan_amount INTEGER,
  term STRING,
  years_in_current_job STRING,
  home_ownership STRING,
  purpose STRING,
  monthly_debt DECIMAL(8,2),
  loan_id STRING
);
Load Data from the External Stage
COPY INTO loans_multiple_file
FROM @adls_ext_stage
PATTERN = '.*loans_.*.csv';
Execution details:
All eight files were loaded without errors.
Time taken: 15 seconds
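Because COPY_HISTORY returns one row per loaded file, it is an easy way to see that the single COPY statement picked up all eight files. This is a sketch; the one-hour window is an assumption.
-- One row per file loaded into the target table
SELECT file_name,
       row_count,
       status
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
       TABLE_NAME => 'LOANS_MULTIPLE_FILE',
       START_TIME => DATEADD('hour', -1, CURRENT_TIMESTAMP())))
ORDER BY file_name;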
Verify Row Count
SELECT COUNT(*) AS total_rows_loaded_multiple_files
FROM loans_multiple_file;
Total rows loaded: 25,000,000 (25 Million)
Performance Comparison
Approach       | Total Files | Rows Loaded | Execution Time
Single File    | 1           | 25 Million  | 45 secs
Multiple Files | 8           | 25 Million  | 15 secs
The results speak for themselves: loading smaller, parallel files is significantly faster.
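As a final sanity check, both tables should hold identical row counts, since they were loaded from the same dataset:
-- Both approaches should report 25,000,000 rows
SELECT (SELECT COUNT(*) FROM loans_single_file)   AS single_file_rows,
       (SELECT COUNT(*) FROM loans_multiple_file) AS multiple_file_rows;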
Why Does This Happen?
An X-Small Snowflake virtual warehouse has eight cores, and each core can process one file at a time.
Single large file: Only one core is engaged, leaving the remaining seven idle.
Eight smaller files: All eight cores work simultaneously, completing the load far more quickly.
This parallelism is the reason Snowflake recommends splitting data into 100–250 MB files for efficient loading.
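If your data already sits in Snowflake (or can be staged there first), you can have Snowflake produce files in roughly this size range for you when unloading, instead of splitting them by hand. This is a sketch, not part of the experiment above; the target path is hypothetical, and MAX_FILE_SIZE is specified in bytes.
-- Unload into multiple files capped at ~250 MB each (compressed)
COPY INTO @adls_ext_stage/loans_split/
FROM loans_single_file
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = GZIP)
HEADER = TRUE
MAX_FILE_SIZE = 262144000;  -- 250 MB in bytes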
How Does Snowflake Know How Many Files to Process in Parallel?
Snowflake ties this behavior to the warehouse’s compute resources: an X-Small warehouse can work on up to eight files in parallel during a COPY operation, a number that also appears as the default of the MAX_CONCURRENCY_LEVEL warehouse parameter.
You can view the current setting for your warehouse with:
SHOW PARAMETERS IN WAREHOUSE COMPUTE_WH;
In our experiment, the eight-file approach aligned perfectly with the maximum concurrency level of the X-Small warehouse, allowing all cores to work at full capacity.
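To view just that one parameter rather than the full list, the same statement accepts a LIKE filter:
-- Narrow the output to the parameter of interest
SHOW PARAMETERS LIKE 'MAX_CONCURRENCY_LEVEL' IN WAREHOUSE COMPUTE_WH;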
Efficient data loading in Snowflake is not just about file size; it’s about leveraging the architecture for parallel processing. By breaking large datasets into optimally sized chunks, you can:
Reduce load times dramatically
Maximize warehouse resources
Follow Snowflake’s best practices for scalable performance
The next time someone asks why Snowflake insists on those 100–250 MB files, you’ll have both the data and the explanation ready, just as my mentees do now.
Want to go beyond this experiment and master Snowflake inside-out?
Check out the SnowPro Core Certification Skill Path on Enqurious Academy and start your certification journey.