Qwen 3.5 122B vs Qwen 3.6 35B - Which to choose?

Reddit r/LocalLLaMA / 4/20/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • A Reddit user asks whether Qwen 3.5 122B and Qwen 3.6 35B have been directly compared in evals and benchmarks, especially for coding (Opencode) and chat (OpenWebUI) use cases.
  • They note that Qwen 3.6 35B is faster due to its smaller size, but they want evidence that it matches or exceeds the 122B model in coding quality and overall index performance.
  • The user highlights that Artificial Analysis reportedly ranks the 35B model higher on coding, agentic use cases, and a general index, raising questions about whether the larger 122B model could still outperform on long tool-calling tasks.
  • The post centers on community “experiences so far,” asking which model better sustains intelligent, long-running tool-calling and maintains higher “IQ” in practice.

Hello guys,
has anybody tested both on evals and benchmarks to see the difference?

I am running a DGX Spark 128GB machine and am deciding which model to use for coding (Opencode) and chat (OpenWebUI). Of course the speed will be higher with the 35B, but has anybody here checked the quality and performance of these two models on benchmarks? What are your experiences?
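One way to get a first-hand answer on your own machine is to time both models through the OpenAI-compatible API that most local backends (and OpenWebUI itself) expose. This is a minimal sketch, not a rigorous eval: the base URL and model IDs below are placeholders for whatever your server actually reports, and it only measures throughput on a single prompt, not answer quality.

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8000/v1"        # placeholder: your local OpenAI-compatible server
MODELS = ["qwen3.5-122b", "qwen3.6-35b"]     # placeholder model IDs; check your server's /v1/models
PROMPT = "Write a Python function that merges two sorted lists."

def tokens_per_second(completion_tokens: int, elapsed_s: float) -> float:
    """Throughput computed from the usage stats the API response includes."""
    return completion_tokens / elapsed_s if elapsed_s > 0 else 0.0

def time_model(model: str) -> float:
    """Send one chat completion and return generation throughput in tokens/sec."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
        "max_tokens": 512,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req, timeout=300) as resp:
        body = json.load(resp)
    elapsed = time.perf_counter() - start
    usage = body.get("usage", {})
    return tokens_per_second(usage.get("completion_tokens", 0), elapsed)

# Example run (requires both models to actually be served by your backend):
# for m in MODELS:
#     print(f"{m}: {time_model(m):.1f} tok/s")
```

For a quality comparison rather than a speed one, you would feed both models the same set of coding prompts and judge the outputs, which is essentially what harnesses behind the Artificial Analysis numbers do at scale.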

Artificial Analysis ranks the 35B 3.6 higher than the 122B 3.5 on Coding, on Agentic Use Cases and on the general Index.

Now I am worried that the 3.6 is going to perform worse than the 3.5 on long-running tool-calling tasks, and in terms of its "intelligence" / IQ. What are your experiences so far?

submitted by /u/Storge2