We absolutely need Qwen3.6-397B-A17B to be open source

Reddit r/LocalLLaMA / 4/5/2026


Key Points

  • The post argues that Qwen3.6-397B-A17B represents a substantial real-world improvement over Qwen 3.5, especially in reliability and task completion end-to-end.
  • The author claims it performs closer to Claude Sonnet than other “open source” models they have tested in practice, even if some benchmark comparisons might not fully reflect the gap.
  • The post attributes prior failures of comparable models to their tendency to “fall apart” in real usage despite being close on benchmark scores.
  • It advocates for releasing models as open source on grounds that cloud GPU renters and multiple low-cost inference providers can run them, and open access enables modification and reduces censorship constraints.
  • Overall, the post frames open-source availability of capable large models as necessary for users and the ecosystem, not just as a theoretical preference.

The benchmarks may not show it but it's a substantial improvement over 3.5 for real world tasks. This model is performing better than GLM-5.1 and Kimi-k2.5 for me, and the biggest area of improvement has been reliability.

It feels as reliable as Claude at getting shit done end to end, without messing up halfway and wasting hours. This is the first OS model that has actually felt comparable to Claude Sonnet.

We have been comparing OS models with Claude Sonnet and Opus left and right for months now. They look close in benchmarks but fall apart in the real world; the models that are claimed to be close to Opus haven't even reached Sonnet-level quality in my real-world usage.

This is the first model I can confidently say very closely matches Sonnet.
And before some of you come at me with "nobody will be able to run it locally": yes, most of us won't be able to run it on our laptops, but

- some of us rent GPUs in the cloud to do things we could never do with closed models

- you get 50 other inference providers hosting the model at dirt-cheap prices

- removing censorship, and the freedom to use and modify this model however you want

- and many other things

Big open source models that are actually decent are necessary.

submitted by /u/True_Requirement_891