
RTX 3060 12GB as a second GPU

Reddit r/LocalLLaMA / 3/13/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The author is exploring using an RTX 3060 12GB as a second GPU to support LLM inference and training on a budget for a home server.
  • They believe the 3060’s 12 GB of VRAM could be advantageous and potentially outperform CPU offloading, despite the card being older.
  • They have concerns about CUDA driver compatibility, inference engine compatibility, and inter-GPU communication when mixing architectures (3060 with a 5070 Ti).
  • They are worried about temperatures, specifically whether the 3060 can handle hot intake air from the first GPU and how to maintain safe thermals in their setup.
  • The post seeks practical community guidance on viability, compatibility, and cooling strategies for a mixed-GPU ML workflow.

Hi!

I’ve been messing around with LLMs for a while, and I recently upgraded to a 5070 Ti (16 GB). It feels like a breath of fresh air compared to my old 4060 (8 GB), but now I’m finding myself wanting a bit more VRAM. I’ve searched the market, and a 3060 (12 GB) seems like a pretty decent option.

I know it’s an old GPU, but it should still be better than CPU offloading, right? These GPUs are going into my home server, so I’m trying to stay on a budget. I’m going to use them for inference and for training models.
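For inference, the usual approach is to split the model's layers across both cards rather than run anything in lockstep, so the mismatched architectures mostly don't matter. Here's a minimal sketch assuming the Hugging Face transformers + accelerate stack; the model ID and the per-GPU memory caps are placeholders, not a tested config for this exact 5070 Ti + 3060 pair (llama.cpp users would reach for `--tensor-split` instead):

```python
# Sketch: spread one model across two GPUs with transformers + accelerate.
# pip install torch transformers accelerate
# Model name and memory caps below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model, swap for your own

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                    # let accelerate place layers on both GPUs
    max_memory={0: "14GiB", 1: "11GiB"},  # headroom per card: GPU 0 = 5070 Ti, GPU 1 = 3060
)

# Inputs go to the device holding the embedding layer (cuda:0 with this split).
inputs = tokenizer("Hello from a mixed-GPU box:", return_tensors="pt").to("cuda:0")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```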

Do you think I might run into any issues with CUDA drivers, inference engine compatibility, or inter-GPU communication? Mixing different architectures makes me a bit nervous.
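On the driver side, a single installed driver serves both cards, so the first sanity check is just whether both GPUs show up with their expected compute capability. A quick PyTorch sketch (assumes a CUDA build of torch; the sm_ values in the comment are what I'd expect for these generations, not something verified on this exact pairing):

```python
# Sanity check: can CUDA see both cards, and with what compute capability?
# Expectation (assumed): 3060 = Ampere sm_86, 5070 Ti = Blackwell sm_120.
import torch

print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(i)
    major, minor = torch.cuda.get_device_capability(i)
    free, total = torch.cuda.mem_get_info(i)  # bytes (free, total)
    print(f"GPU {i}: {name}, sm_{major}{minor}, "
          f"{free / 2**30:.1f}/{total / 2**30:.1f} GiB free")
```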

Also, I’m worried about temperatures. On my motherboard, the hot air from the first GPU would blow straight into the second one. My 5070 Ti usually doesn’t go above 75°C under load, so would the 3060 be able to handle that hot intake air?
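If it helps, here's a small sketch for watching both cards' temperatures while load-testing, just polling nvidia-smi (which ships with the driver) every few seconds:

```python
# Sketch: poll both GPUs' temps and utilization via nvidia-smi. Ctrl+C to stop.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,temperature.gpu,utilization.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())
    time.sleep(5)
```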

submitted by /u/catlilface69