Which Version of Qwen 3.6 for M5 Pro 24g

Reddit r/LocalLLaMA / 4/20/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The post asks for guidance on which quantized variant of Qwen 3.6 to run via Ollama on an M5 Pro with 24GB of RAM.
  • The author is deciding between Q4 and Q3 options but reports difficulty finding a “good” Q3 solution.
  • The main goal is practical model selection to fit memory constraints while still working well on the user’s hardware.
  • The request is targeted at the local LLM community (LocalLLaMA) for recommendations based on real-world performance and compatibility.
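The Q4-vs-Q3 trade-off in the post comes down to simple arithmetic: resident size is roughly parameter count times bits-per-weight, plus runtime overhead, and it has to fit inside the memory macOS will actually hand to the GPU. A minimal sketch of that estimate, with illustrative (not authoritative) bits-per-weight figures for common GGUF quant schemes and an assumed ~30B-parameter model:

```python
# Rough memory estimate for a quantized model on a 24GB unified-memory Mac.
# Assumptions: the bits-per-weight values below are typical ballpark figures
# for GGUF quants (actual values vary by scheme and model), the 2 GB overhead
# stands in for KV cache and runtime buffers, and macOS by default exposes
# roughly 75% of RAM to the GPU.

def model_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Approximate resident size in GB: weights plus KV-cache/runtime overhead."""
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bpw ~= 1 GB
    return weights_gb + overhead_gb

for label, bpw in [("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    size = model_gb(30, bpw)          # hypothetical ~30B-parameter model
    fits = size < 24 * 0.75           # assumed GPU-visible share of 24GB RAM
    print(f"{label}: ~{size:.1f} GB, fits: {fits}")
```

Under these assumptions the Q4 quant lands just over the GPU-visible budget while the Q3 quant fits with headroom, which mirrors the dilemma in the post; plugging in the real parameter count of the chosen Qwen 3.6 variant changes the numbers but not the method.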

I have an M5 Pro setup with 24GB of RAM. I'm not sure whether to run the Q4 version, but I couldn't find a good Q3 option either. Can you recommend one? I want to try Qwen 3.6 with Ollama.

submitted by /u/utnapistim99