What's the latest status on 7900 XTX multi-GPU setups?

Reddit r/LocalLLaMA / 5/1/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post discusses current multi-GPU setup options using AMD Radeon RX 7900 XTX versus NVIDIA GPUs, focusing on pricing and resale flexibility in a home lab context.
  • The author notes that the 7900 XTX has strong headline specifications (similar VRAM and memory bandwidth, higher TFLOPS) but lacks NVLink, which can complicate seamless multi-GPU scaling.
  • A key question is whether multi-GPU techniques such as tensor parallelism are now supported by serving frameworks like vLLM and others.
  • The discussion acknowledges that AMD’s software ecosystem has historically lagged NVIDIA but suggests that “catch-up” is actively underway, implying improving support over time.
  • Overall, the post frames the 7900 XTX as potentially attractive for local LLM workflows if multi-GPU parallelism is adequately supported in the software stack.

I am currently running dual RTX 5060 Ti 16 GB cards (both of which are easy to sell or reuse in other PCs at home) and monitoring the used market for more of the same, or alternatively an RTX 3090. I couldn't help but notice that some quite "juicy" prices occasionally show up for the 7900 XTX (50-60% of the used RTX 3090 price).

I know that AMD's software maturity has lagged behind NVIDIA's, but also that catch-up is being actively worked on. The 7900 XTX has some pretty nice stats overall (same memory bandwidth, same VRAM, and much higher TFLOPS, but lacking NVLink of course).

Is tensor parallelism (etc.) supported by now in, e.g., vLLM and other serving frameworks?
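
For context, here is a minimal sketch of the kind of invocation I mean, using vLLM's Python API; the model name is just an illustrative placeholder, and whether this actually runs on dual 7900 XTXs under ROCm is exactly what I'm asking:

```python
# Minimal sketch: requesting tensor parallelism via vLLM's Python API.
# Whether this works on dual 7900 XTXs under ROCm is the open question;
# the model name below is an illustrative placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    tensor_parallel_size=2,                    # shard weights across both GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Hello from a dual-GPU box"], params)
print(outputs[0].outputs[0].text)
```

The server-side equivalent would be `vllm serve <model> --tensor-parallel-size 2`.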

submitted by /u/ziphnor