What starts to become possible with two 3090s that wasn't with just one?

Reddit r/LocalLLaMA / 4/19/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The post asks a practical question: what additional capabilities emerge when running two NVIDIA RTX 3090 GPUs instead of one, in the context of using Qwen 3.6.
  • It frames the discussion around local LLM usage, suggesting that multi-GPU setups can unlock larger models and new workload possibilities.
  • The author's curiosity is driven by a positive experience with Qwen 3.6 on their current setup and an interest in the limits and benefits of scaling GPU count.
  • The content is a Reddit inquiry rather than a formal benchmark or confirmed result.
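The core practical difference between one and two 3090s is VRAM: 24 GB vs 48 GB total. As a rough illustration (not from the post), the back-of-the-envelope check below estimates whether a quantized model's weights fit in a given VRAM budget. The overhead figure for KV cache and activations is an assumption for illustration, not a measured value.

```python
def fits_in_vram(params_b, bits_per_weight, vram_gb, overhead_gb=2.0):
    """Rough check: do a quantized model's weights fit in the given VRAM?

    params_b: parameter count in billions (e.g. 70 for a 70B model)
    bits_per_weight: quantization width (e.g. 4 for a 4-bit quant)
    overhead_gb: rough allowance for KV cache and activations (assumed)
    """
    # Weight memory in GB: params * bits / 8 bits-per-byte
    weight_gb = params_b * bits_per_weight / 8
    return weight_gb + overhead_gb <= vram_gb

# One RTX 3090 (24 GB) vs two (48 GB), for a 70B model at 4-bit:
one_card = fits_in_vram(70, 4, 24)   # 35 GB of weights alone exceeds 24 GB
two_cards = fits_in_vram(70, 4, 48)  # ~37 GB total fits in 48 GB
```

By this estimate, a 4-bit 70B-class model is out of reach for a single 3090 but fits comfortably across two, which is the kind of capability jump the post is asking about.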

qwen 3.6 has been working great and has got me wondering.

submitted by /u/GotHereLateNameTaken