8x2080TI 22GB a good idea?

Reddit r/LocalLLaMA / 3/23/2026


Key Points

  • The author is considering adding six more 2080Ti 22GB GPUs to an ESC8000A-E12 server (currently running 2x 2080Ti) to fill all eight slots and reach 176GB of VRAM for under $2k.
  • They acknowledge that the Turing-based 2080Ti lacks BF16/FA2 support and may have longevity and compatibility issues with modern LLM workloads.
  • An alternative like the RTX 5060Ti 16GB offers better per-GPU performance due to newer architecture but would cost roughly twice as much and require discarding the two existing GPUs.
  • A single RTX 4090 with 48GB could be appealing but would cost more than the entire 8x2080Ti plan, raising questions about cost-per-performance and total memory for their use case.
  • The post is seeking suggestions and guidance from the community on the best path forward.

Ok so hear me out, I have a rather unique situation here and want some good recommendations.

I currently have a server (ESC8000A-E12) that's designed to host 8x H100. It's already set up and working with 2x 2080Ti modded to 22GB. I got this a long time ago during the Stable Diffusion era, and the idea of running LLMs on it (ChatGPT was only just a thing back then) never crossed my mind.

Jump to the present, and everyone is deploying LLMs on their local hardware, so I'm thinking about "finishing" the machine by filling the last 6 GPU slots. I have access to a reliable supply of 2080Ti 22GB cards for ~$290 each, giving me 176GB of VRAM for just under $2K.

However, I do understand that Turing is a very old architecture that doesn't even support BF16 (only FP16) or FlashAttention-2. I've browsed this subreddit for some time looking for alternatives to compare. The best one I've found is the 5060Ti 16GB, which, thanks to FP4 support and the newer architecture, would give better per-GPU performance. But a 5060Ti 16GB costs twice as much as a 2080Ti 22GB, and I would need to discard and replace the two cards I currently have. I'm also concerned about longevity if software support for Turing continues to degrade.

A 4090 with 48GB sounds good, but a single one would cost me more than the entire 8x 2080Ti 22GB build.
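For what it's worth, the raw capacity math on the three options can be sketched like this. The 2080Ti price is from the post; the 5060Ti price is the "twice as much" figure; the 4090 48GB price is an assumption, since the post only says one card costs more than the whole 8x 2080Ti build:

```python
# Rough total-VRAM and $/GB comparison for the options discussed above.
# Prices marked "assumed" are illustrative placeholders, not quotes.
options = {
    # name: (gpu_count, vram_per_gpu_gb, price_per_gpu_usd)
    "8x 2080Ti 22GB": (8, 22, 290),   # $290/card per the post
    "8x 5060Ti 16GB": (8, 16, 580),   # post says ~2x the 2080Ti price
    "1x 4090 48GB":   (1, 48, 2600),  # assumed; post only says > 8x 2080Ti total
}

for name, (n, vram, price) in options.items():
    total_vram = n * vram
    total_cost = n * price
    print(f"{name}: {total_vram} GB total, ${total_cost}, "
          f"${total_cost / total_vram:.2f}/GB")
```

Note the post's "under $2K" figure counts only the 6 additional cards (6 x $290 = $1,740), since two 2080Tis are already installed; the full 8-card cost is $2,320.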

Open to any suggestions, thanks in advance!

submitted by /u/PossiblePossible2571