Given how good Qwen has become, is it time to grab a 128GB M5 Max?

Reddit r/LocalLLaMA / 4/23/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage · Industry & Market Moves

Key Points

  • The author is considering upgrading from an M1 Pro with 32GB RAM to an M5 Max with 128GB because local model quality has been improving quickly.
  • They highlight Qwen’s progress, suggesting that recent 27B models are approaching the quality of Anthropic’s Claude Opus 4.5, the model referenced in the post.
  • The post frames the upgrade as an opportunity to experiment more effectively with local LLMs rather than a direct comparison to current flagship performance.
  • Overall, the discussion reflects a “now might be the time” sentiment toward investing in higher-memory hardware for running larger local models.

I was on the fence about upgrading my M1 Pro 32GB, but seeing how good Qwen is becoming, isn't it time to start experimenting with local models?

My experience so far was that local models never came close to Opus, but I see that the 27B models are now getting close to Opus 4.5 (???), which sounds exciting!
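For sizing the upgrade, a rough back-of-envelope estimate helps: a model's weight footprint is roughly parameter count times bits per weight divided by 8, and quantization is what makes a 27B model comfortable on consumer hardware. The sketch below is illustrative only (it ignores KV cache, context length, and runtime overhead, which add several more GiB):

```python
def weight_memory_gib(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight footprint in GiB: params * (bits / 8) bytes."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

# A 27B model at common quantization levels (weights only):
for bits in (16, 8, 4):
    print(f"27B @ {bits}-bit: ~{weight_memory_gib(27, bits):.0f} GiB")
```

At 4-bit quantization a 27B model's weights fit in roughly 13 GiB, well within 32GB, while 128GB of unified memory leaves room for much larger models or long contexts alongside the OS.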

submitted by /u/Rabus