AI Navigate

3x RTX 5090s to a single RTX Pro 6000

Reddit r/LocalLLaMA / 3/22/2026

💬 Opinion · Tools & Practical Usage

Key Points

  • The user currently runs local model inference on 2x RTX 5090 GPUs and is considering upgrading to handle larger models.
  • They are weighing two upgrade paths: adding a third RTX 5090 FE for more VRAM or replacing both with a single RTX Pro 6000 (see the VRAM sketch after this list).
  • Their use case is running larger models and adding ComfyUI rendering to their openclaw stack, which motivates the upgrade.
  • If they go ahead, they plan to sell their existing Framework Desktop and return the recently purchased DGX Spark.
  • They are asking whether this hardware plan makes sense, indicating concern about whether they are going too far.
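For context, here is a minimal sketch of the VRAM math behind the two paths, assuming 32 GB per RTX 5090 and 96 GB on the RTX Pro 6000 (Blackwell workstation card); the capacities are assumptions, so adjust if the actual specs differ:

```python
# Rough VRAM comparison for the two upgrade paths discussed above.
# Card capacities are assumptions: 32 GB per RTX 5090, 96 GB for the
# RTX Pro 6000; adjust if the actual specs differ.

GB = 1  # work in whole gigabytes for readability

paths = {
    "current 2x RTX 5090": 2 * 32 * GB,
    "3x RTX 5090": 3 * 32 * GB,
    "1x RTX Pro 6000": 96 * GB,
}

for name, vram in paths.items():
    print(f"{name}: {vram} GB total VRAM")

# Both upgrade paths land at 96 GB, but the single card holds it in one
# contiguous pool: no tensor-parallel splitting across GPUs and no
# inter-GPU transfers. That matters for ComfyUI, which typically runs a
# model on a single GPU and so could use all 96 GB at once.
```

The totals come out equal, so the decision largely reduces to whether a single contiguous 96 GB pool (simpler for single-GPU workloads like ComfyUI) is worth more than three cards' combined compute.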

I've got a server with 2x RTX 5090s that does most of my inference; it's plenty fast for my needs (running local models for openclaw).

I was thinking of adding another RTX 5090 FE for extra VRAM. Alternatively, I could sell the two I have (5090 FEs, I paid MSRP for both) and move up to a single RTX Pro 6000.

My use case is running larger models and adding ComfyUI rendering to my openclaw stack.

PS: I already own a Framework Desktop and just picked up a DGX Spark. The Framework would get sold as well, and the DGX Spark would be returned.

Am I nuts for even considering this?

submitted by /u/flanconleche