True story,
I got interested in AI after seeing it at work and wanted to run models locally. I started with an M3 Ultra with 96GB, quickly learned it wasn't enough for what I wanted, and kept upgrading hardware: refurbished Mac Studios at 256GB and 512GB, and now an RTX Pro 6000 that arrived today. I've tested many model families (Qwen, DeepSeek, Gemma, MiniMax, etc.); my current favorite is MiniMax M2.7 230B/A10B. I'm also waiting for LM Studio to support DeepSeek v4 Flash.
I have mixed feelings: excitement about local speed and bandwidth, and sadness about how much money I've spent learning this stack. The funny part: my 16GB MacBook Pro has been more stable than my 512GB setup, which has crashed multiple times.
Still, I'm convinced local LLMs are the future, and this community has helped me learn a lot. Thank you to everyone here.
Question for the group: for those of you running high-end local setups, what gave you the biggest real-world stability and speed gains (not just benchmark wins)?




