[D] What's the modern workflow for managing CUDA versions and packages across multiple ML projects?

Reddit r/MachineLearning / 3/12/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The author is an ML engineer who relies on conda for both Python dependencies and system-level packages like CUDA, but finds conda slow, prone to unwanted updates, and difficult to manage across multiple projects.
  • They face additional complexity from the need for older Linux kernels in some projects, and consider Docker as a way to isolate system-level dependencies.
  • They recognize uv as a fast, modern Python package manager, but note that it cannot install system-level dependencies like CUDA, so it does not fully replace what conda provided.
  • The workflow they propose is to use Docker for system-level dependencies (CUDA versions, kernel constraints) and uv inside the container for Python packages, giving each project a fully isolated, reproducible environment; they are asking the community for best practices for multi-project workflows.

Hello everyone,

I'm a relatively new ML engineer and so far I've been using conda for dependency management. The best thing about conda was that it allowed me to install system-level packages like CUDA into isolated environments, which was a lifesaver since some of my projects require older CUDA versions.

That said, conda has been a pain in other ways. Package installations are painfully slow, it updates versions I didn't want it to touch and breaks other dependencies in the process, and I've had to put a disproportionate amount of effort into getting it to do exactly what I wanted.

I also ran into cases where some projects required an older Linux kernel, which added another layer of complexity. I didn't want to spin up multiple WSL instances just for that, and that's when I first heard about Docker.

More recently I've been hearing a lot about uv as a faster, more modern Python package manager. From what I can tell it's genuinely great for Python packages but doesn't handle system-level installations like CUDA, so it doesn't fully replace what conda was doing for me.

I can't be the only one dealing with this. To me it seems that the best way to go about this is to use Docker to handle system-level dependencies (CUDA version, Linux environment, system libraries) and uv to handle Python packages and environments inside the container. That way each project gets a fully isolated, reproducible environment.
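For concreteness, here's roughly what I imagine a project Dockerfile would look like. This is just a sketch, not something I've battle-tested — the CUDA base image tag, the project layout (pyproject.toml plus uv.lock), and the train.py entrypoint are all placeholders:

```dockerfile
# Pick the CUDA version this project needs via the base image
# (tag is an example — swap in whatever version the project requires).
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

# Copy the uv binaries from the official uv image instead of
# curl-installing them (pin a specific uv tag for reproducibility).
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

WORKDIR /app

# Copy only the dependency metadata first so Docker caches the
# dependency layer and doesn't re-resolve on every source change.
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-install-project

# Now copy the source and install the project itself.
COPY . .
RUN uv sync --frozen

# Hypothetical entrypoint for the project.
CMD ["uv", "run", "python", "train.py"]
```

The idea being: the image pins CUDA and the OS, the uv lockfile pins Python packages, and each project just gets its own Dockerfile.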

But I'm new to this and don't want to commit to a workflow based on my own assumptions. I'd love to hear from more experienced engineers what their day-to-day workflow for multiple projects looks like.

submitted by /u/sounthan1