Do cheap 32GB V100s still make sense for homelab AI?

Reddit r/LocalLLaMA / 5/5/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The author is considering buying cheap Tesla V100 32GB GPUs for a homelab to run larger local LLMs, primarily to gain more VRAM per card rather than expecting top speed or efficiency versus modern RTX.
  • Proposed use cases include experimenting with larger quantized models, testing longer context lengths, and potentially splitting/offloading workloads across multiple cards.
  • They acknowledge the V100's key drawbacks: it is power-hungry, aging hardware that lacks some modern consumer-GPU features.
  • The post asks whether V100 32GB cards are still a cost-effective choice in 2026 for homelab AI compared with investing the same money in a newer single GPU.
  • It seeks real-world experiences from users who have run V100 alongside newer RTX cards to evaluate performance, software support, and practicality.

I already have an RTX 5060 Ti 16GB and a 5070 Ti, but I’m wondering whether picking up a couple of Tesla V100 32GB cards could actually make sense as a value proposition specifically for larger local models.

I know the V100 is old, power-hungry, and missing newer consumer-card features, and I’m not expecting it to beat modern RTX cards for speed or general efficiency. The appeal is mostly the 32GB VRAM per card, especially if they can be found cheap enough.

Use case would be local LLM experimentation: running larger quantized models, testing longer context, maybe splitting/offloading across cards where supported. I already have newer RTX hardware for faster smaller models and image generation, so this would mainly be about getting more VRAM for less money.
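For the "does 32GB per card actually buy me larger models and longer context" question, a back-of-envelope VRAM estimate is useful. This is a minimal sketch, not a measurement: the bits-per-weight figure, context length, and model shapes below are illustrative assumptions (the shapes roughly match a Llama-3-70B-style config: 80 layers, 8 KV heads, head dim 128), and real runtimes add overhead for activations, buffers, and fragmentation.

```python
# Rough VRAM estimate for a quantized LLM. All constants are
# assumptions for illustration, not measured values.

def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a quantized model."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GB (factor of 2 for keys + values)."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# Example: a 70B-class model at ~4.5 bits/weight (Q4_K_M-style quant),
# 16k context with an fp16 KV cache.
w = weights_gb(70, 4.5)              # ~39.4 GB of weights
kv = kv_cache_gb(80, 8, 128, 16384)  # ~5.4 GB of KV cache
print(f"weights ~{w:.1f} GB, kv ~{kv:.1f} GB, total ~{w + kv:.1f} GB")
```

Under those assumptions, the total lands around 45 GB: too big for any single consumer card the author owns, but comfortable across two 32GB V100s, which is exactly the value case the post is probing.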

Is there a point where 32GB V100s still make sense in 2026 for homelab AI, or is the age/platform/power/software support enough of a downside that I’d be better off putting the money toward a newer single GPU?

Interested in real-world experiences, especially from people who have run V100s alongside newer RTX cards.

submitted by /u/SKX007J1