I already have an RTX 5060 Ti 16GB and a 5070 Ti, but I’m wondering whether picking up a couple of Tesla V100 32GB cards could actually be good value, specifically for larger local models.
I know the V100 is old, power-hungry, and missing newer consumer-card features, and I’m not expecting it to beat modern RTX cards for speed or general efficiency. The appeal is mostly the 32GB VRAM per card, especially if they can be found cheap enough.
Use case would be local LLM experimentation: running larger quantized models, testing longer context, maybe splitting/offloading across cards where supported. I already have newer RTX hardware for faster smaller models and image generation, so this would mainly be about getting more VRAM for less money.
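To be concrete about the splitting part, this is roughly the kind of thing I’d be running (llama-cpp-python here; the model file, context length, and split ratios are just placeholders for illustration, not a claim that this exact mix works well):

```python
# Rough sketch only: llama-cpp-python spreading a quantized GGUF model across
# multiple GPUs. Model path, context size, and split ratios are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-70b-q4_k_m.gguf",  # hypothetical file name
    n_gpu_layers=-1,   # offload every layer to the GPUs
    # Fraction of the model per visible GPU, e.g. two 32GB V100s plus the
    # 16GB 5060 Ti and 16GB 5070 Ti -- adjust to whatever is actually installed.
    tensor_split=[0.33, 0.33, 0.17, 0.17],
    n_ctx=16384,       # longer context, to see how the extra VRAM holds up
)

out = llm("Q: Is mixing Volta and Blackwell cards in one box sane?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```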
Is there a point where 32GB V100s still make sense in 2026 for homelab AI, or are the age, platform requirements, power draw, and fading software support enough of a downside that I’d be better off putting the money toward a single newer GPU?
Interested in real-world experiences, especially from people who have run V100s alongside newer RTX cards.