When working on Python projects, especially in data science, managing package dependencies is crucial. Different projects often require different versions of libraries like NumPy, Pandas, or scikit-learn. Installing all packages globally can lead to version conflicts and break other projects.
That’s where virtual environments come in! They create an isolated workspace with its own Python and packages—just for your project.
Open your terminal or command prompt and move into your project folder:
cd path/to/your/project
Use the built-in venv module:
python -m venv env
This creates a new folder called env containing the isolated environment.
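(If the python command isn't available on your system, python3 -m venv env does the same thing.) If you're curious what the folder holds, a quick check on macOS/Linux looks like this; on Windows the equivalents of bin and lib are Scripts and Lib:
ls env
# typically shows: bin  include  lib  pyvenv.cfg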
On Windows:
.\env\Scripts\activate
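(If your Windows terminal is PowerShell rather than Command Prompt, the activation script in the same Scripts folder has a .ps1 extension:)
.\env\Scripts\Activate.ps1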
On macOS/Linux:
source env/bin/activate
You'll know it's activated when the terminal prompt shows (env) in front of it.
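For example (the folder name and prompt here are just illustrative), the change looks roughly like this:
~/projects/my-analysis$ source env/bin/activate
(env) ~/projects/my-analysis$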
Now that you're inside the virtual environment, install your project’s dependencies like normal:
pip install numpy pandas matplotlib
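If you want to double-check that the packages landed inside the environment rather than your global Python, list what pip sees and try a quick import:
pip list
python -c "import numpy, pandas, matplotlib"
# the import check exits silently if everything installed correctly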
To make your project reproducible, freeze the current list of installed packages into a requirements.txt file:
pip freeze > requirements.txt
This creates a file named requirements.txt in your project folder, which might look like this:
numpy==1.24.2
pandas==2.1.0
matplotlib==3.7.1
To recreate the same environment later or on another machine:
1. Create and activate a new virtual environment
2. Run:
pip install -r requirements.txt
This installs all packages listed in the file—same versions, no surprises.
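Putting the whole workflow together on a fresh machine (macOS/Linux commands shown; swap in the Windows activation command from earlier if needed), the sequence looks roughly like this:
cd path/to/your/project
python -m venv env
source env/bin/activate
pip install -r requirements.txt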
A few good habits to follow:
- Name your environments something meaningful (e.g., venv, ds-env, myproject-env)
- Add env/ to .gitignore so you don't upload it to GitHub (see the sketch after this list)
- Always regenerate requirements.txt after installing or removing packages
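For the .gitignore tip, a minimal sketch (assuming your environment folder is named env, as in the examples above):
# .gitignore
env/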
Virtual environments and requirements.txt files are essential tools for any serious Python developer. Whether you're building data science models or web apps, they help you keep your project organized, portable, and maintainable.