AI Navigate

Qwen 3.5 397B is the best local coder I have used until now

Reddit r/LocalLLaMA / 3/21/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The author claims Qwen 3.5 397B outperforms its smaller siblings and competing local coder models in knowledge and bug-freeness.
  • They note that, although it is the slowest option, it requires fewer turns to fix issues and delivers more concise thinking.
  • The post highlights running a 123 GiB IQ2_XS quant, in contrast to other models the author runs at IQ4_XS or Q6_K.
  • It concludes that Qwen 3.5 397B is the best local coder they've used, with a link to the Reddit discussion.

Omg, this thing is amazing. I have tried all its smaller siblings 122b/35b/27b, gpt-oss 120b, StepFun 3.5, MiniMax M2.5, Qwen Coder 80B and also the new Super Nemotron 120b. None even come close to the knowledge and the bug-freeness of the big Qwen 3.5.

Ok, it is the slowest of them all, but what I am losing in token generation speed I am gaining by not needing multiple turns to fix its issues and by not waiting through endless thinking. And yes, in contrast to its smaller siblings or to StepFun 3.5, its thinking is actually very concise.

And the best of it all: I am using the IQ2_XS quant from AesSedai. This thing is just 123 GiB! All the others I am running at IQ4_XS at minimum (StepFun 3.5, MiniMax M2.5) or at Q6_K (Qwen 3.5 122b/35b/27b, Qwen Coder 80b, Super Nemotron 120b).
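For context on why a 123 GiB file can hold a 397B-parameter model, here is a minimal back-of-envelope sketch estimating the effective bits per weight from the figures quoted in the post. The numbers (123 GiB, 397B parameters) come from the post itself; the calculation is only an approximation, since real quants like IQ2_XS mix tensor types and carry metadata overhead.

```python
def bits_per_weight(size_gib: float, params_billions: float) -> float:
    """Estimate effective bits per weight of a quantized model file.

    size_gib        -- file size in GiB (binary gibibytes)
    params_billions -- parameter count in billions
    """
    size_bits = size_gib * 2**30 * 8          # GiB -> bytes -> bits
    return size_bits / (params_billions * 1e9)

# Figures from the post: 123 GiB IQ2_XS quant of a 397B model.
bpw = bits_per_weight(123, 397)
print(f"~{bpw:.2f} bits per weight")          # prints ~2.66 bits per weight
```

This lands in the expected neighborhood for an IQ2_XS-class quant (a bit over 2 bits per weight once mixed-precision tensors and metadata are counted), which is why it fits in roughly a third of the space of a Q6_K of the same model.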

submitted by /u/erazortt