Have the GB10 devices become the current "best value" for LLMs?

Reddit r/LocalLLaMA / 4/9/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage · Industry & Market Moves

Key Points

  • The post asks whether GB10 hardware variants are currently the best cost/value option for running LLMs locally, given high prices and limited availability of alternatives such as NVIDIA 3090 GPUs and backordered Macs.
  • It notes that building a server is expensive at current memory and storage prices, and that the author wants to avoid spending additional time on driver and compatibility issues given the uncertain status of AMD and Intel products.
  • The author is considering whether to purchase now or wait for upcoming M5 releases in 2–4 months, despite the risk of falling behind in a fast-moving ecosystem.
  • Overall, the discussion focuses on practical purchasing decisions for local LLM experimentation rather than on new model releases or technical training advances.

I want to buy some real hardware because I feel like I'm falling behind. 3090s are >$1000 on eBay, and building out the server would be very expensive with current memory and storage prices. Macs are backordered for the next 5 months. I have no idea about the status of AMD or Intel products, but I don't want to fight driver and compatibility issues on top of trying to get models and harnesses running.

Are the GB10 variants the best value if you want to buy now? Is it better to try to wait on the M5 releases in 2-4 months? That seems like forever in today's fast-moving environment.

submitted by /u/DiscombobulatedAdmin