> Let’s build the biggest ever DGX Spark cluster at home. This is going into my home lab server rack: 2TB of unified memory.
> - 16× DGX Sparks
> - 1× FS 24-port 200Gb QSFP56 switch
> - 16× QSFP56 DAC cables
>
> Should be all set up by tomorrow afternoon. What should I run?
16x DGX Sparks - What should I run?
Reddit r/LocalLLaMA / 4/29/2026
💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage
Key Points
- The post proposes building an extremely large home lab GPU cluster using 16× DGX Spark units with 2TB of unified memory.
- The builder is planning the high-speed networking setup: a 24-port FS switch with 200Gb QSFP56 ports, plus matching QSFP56 DAC cables to link the nodes.
- The author asks the community what workloads or models they should run on the finished cluster.
- The content is framed as a hands-on, practical planning question within the Local LLaMA community rather than reporting a new product release.
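Since the post asks what models a 2TB cluster could host, a quick back-of-envelope sizing check is useful. The sketch below assumes 128GB of unified memory per node (derived from the stated 2TB across 16 Sparks); the 671B parameter count is used purely as an illustrative large-model example, not something named in the post.

```python
# Back-of-envelope memory sizing for the proposed cluster.
# Assumption: 16 nodes x 128 GB unified memory each = 2048 GB total,
# matching the "2TB of unified memory" figure in the post.

NODES = 16
MEM_PER_NODE_GB = 128          # assumed per-Spark unified memory
TOTAL_GB = NODES * MEM_PER_NODE_GB

def model_weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory for model weights alone (ignores KV cache and activations)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# Illustrative example: a 671B-parameter model at various precisions.
for name, bpp in [("FP16", 2.0), ("FP8", 1.0), ("Q4", 0.5)]:
    need = model_weights_gb(671, bpp)
    print(f"{name}: {need:.0f} GB weights, fits in {TOTAL_GB} GB: {need < TOTAL_GB}")
```

Note that weights are only part of the budget: KV cache for long contexts and activation memory also have to fit, so real headroom is smaller than this estimate suggests.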
Related Articles

Black Hat USA
AI Business

Remote agents in Vibe. Powered by Mistral Medium 3.5.
Product · Introducing Mistral Medium 3.5, remote coding agents in Vibe, plus a new Work mode in Le Chat for complex tasks.
Mistral AI Blog

15 Lead Magnet Ideas That Actually Convert in 2026
Dev.to
1.14.4a2
CrewAI Releases

Local AI vs. Cloud AI: When to Use Which (A Developer's Guide)
Dev.to