Troubleshooting Pip Installation Issues on Dataproc with Internal IP Only

The Issue
Cloud platforms may seem like a world of quick clicks and easy deployments, but every checkbox you tick (or forget to untick) can completely change your experience. And guess what? That’s exactly what happened to me.
I was happily setting up my Cloud Dataproc cluster, confidently clicking through the configuration, until… boom! I left one wrong box checked: Internal IP only. And suddenly, pip install psycopg2 turned into a nightmare.
Running:
pip install psycopg2
Resulted in:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f12f19a5810>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/psycopg2/
No matter how many times I tried, it just wouldn’t work.
Root Cause
After much frustration (and some strong coffee), I realized that my Dataproc cluster was isolated from the internet due to the Internal IP only setting. This meant my poor nodes were trapped in a private network, unable to reach PyPI to download packages.
Why Did This Happen?
Dataproc clusters with Internal IP only are designed for security—keeping them safe from the wild internet. But that also means they can’t talk to PyPI, so pip install has nowhere to go.
Symptoms
Pip install fails with network errors.
Retrying doesn’t help (but it does test your patience).
Other internet-dependent commands (wget, curl) also don’t work.
Spinning up another Dataproc cluster without Internal IP only magically fixes everything.
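A quick way to confirm the diagnosis is to check whether the cluster's VMs have external IPs at all. A small sketch, where the name filter my-cluster is a placeholder for your actual cluster name:
# An empty EXTERNAL_IP column confirms the nodes are internal-only
gcloud compute instances list --filter="name ~ my-cluster"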
What’s the fix?
Now, I could have just rebuilt my cluster with the right settings, but where’s the fun in that? Here are the options, from the easy way out to fixes that don’t require starting over.
Option 1: Use a Different Cluster (Easy Way Out)
If security isn’t a major concern, just create a new cluster without the Internal IP only setting. But hey, what’s the lesson in that?
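For reference, the Internal IP only checkbox corresponds to the --no-address flag on gcloud dataproc clusters create, so simply leaving that flag off gives the nodes external IPs. The cluster name and region below are placeholders:
# Without --no-address the nodes get external IPs, so pip can reach PyPI directly
gcloud dataproc clusters create my-cluster --region=us-central1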
Option 2: Enable Cloud NAT (Best Solution for Security)
You can maintain security and get internet access by setting up Cloud NAT:
Go to Google Cloud Console → VPC Network → Cloud NAT.
Create a new NAT gateway:
Select your VPC where Dataproc is running.
Choose the subnet for your cluster.
Enable NAT for all subnets if needed.
Save and apply the settings.
After a few minutes, try pip install again—it should work like magic!
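If you prefer the command line, the same setup looks roughly like this. It’s a minimal sketch: my-vpc, my-router, my-nat, and us-central1 are placeholder names, and it assumes the standard Cloud Router + Cloud NAT flow.
# Create a Cloud Router in the VPC and region where the Dataproc cluster lives
gcloud compute routers create my-router --network=my-vpc --region=us-central1
# Attach a Cloud NAT gateway so internal-only VMs get outbound internet access
gcloud compute routers nats create my-nat --router=my-router --region=us-central1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges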
Option 3: Use an Internal PyPI Mirror (For the Hardcore Users)
Pre-download required .whl files on an internet-enabled machine and upload them to Google Cloud Storage (GCS).
Use Google Artifact Registry to host Python packages internally.
Configure pip to install from these internal sources instead of PyPI (a sketch of the Artifact Registry variant follows below).
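For the Artifact Registry route, pip just needs to point at the internal index instead of pypi.org. This is a hedged sketch: my-project, my-python-repo, and us-central1 are placeholder names, and it assumes the keyrings.google-artifactregistry-auth backend is available on the cluster (for example via a pre-staged wheel or a custom image):
# Authentication backend for pip (on an internal-only cluster this has to come from a pre-staged wheel or custom image)
pip install keyring keyrings.google-artifactregistry-auth
# Install from the internal Python repository instead of PyPI
pip install --index-url https://us-central1-python.pkg.dev/my-project/my-python-repo/simple/ psycopg2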
Option 4: Manually Transfer the Package
On an internet-enabled machine, download the package:
pip download psycopg2
Upload the downloaded .whl file to Google Cloud Storage, for example with gsutil cp.
SSH into the Dataproc cluster, copy the file down from GCS, and install it (GCS is still reachable from an internal-only cluster as long as Private Google Access is enabled on the subnet):
gsutil cp gs://your-bucket-name/psycopg2.whl .
pip install ./psycopg2.whl
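One caveat worth hedging: the wheel has to match the cluster’s OS and Python version. pip download can pin those explicitly, and psycopg2-binary is usually the package that actually ships prebuilt wheels (plain psycopg2 often comes down as a source tarball that needs build tools). A sketch, with the Python version adjusted to your Dataproc image:
# Download a Linux wheel matching the cluster's Python version
pip download psycopg2-binary --only-binary=:all: --platform manylinux2014_x86_64 --python-version 3.8 -d ./wheels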
Conclusion
So here’s the moral of the story: cloud computing is not just a few clicks; every checkbox matters. One tiny mistake (like leaving Internal IP only checked) can send you down a rabbit hole of troubleshooting your Dataproc workflow.
While Internal IP only is great for security, you need to plan ahead for dependencies like pip install. Either set up Cloud NAT, use an internal package repo, or get comfortable manually transferring files.
Next time, I’ll double-check all my checkboxes before deploying anything to the cloud.