Qwen3.6-35B-A3B released!

Reddit r/LocalLLaMA / 4/16/2026

📰 News · Signals & Early Trends · Models & Research

Key Points

  • Qwen3.6-35B-A3B is released as an open-source sparse MoE model with 35B total parameters and 3B active parameters at runtime.
  • The release claims strong agentic coding performance, comparable to models with about 10x the active size.
  • It highlights robust multimodal perception and reasoning, including support for multimodal thinking with both thinking and non-thinking modes.
  • The model is available on Hugging Face and ModelScope, with details on Qwen’s blog and a hosted chat interface (Qwen Studio), making it accessible both for local use and via the platform.
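The headline figures (35B total parameters, 3B active per forward pass) translate into rough memory expectations: all 35B parameters must reside in memory, while per-token compute scales with the 3B active set. A minimal sketch of that arithmetic; the bytes-per-parameter figures are common quantization approximations, not numbers from the announcement:

```python
# Illustrative arithmetic for a sparse MoE model with 35B total
# and 3B active parameters (figures from the announcement).
TOTAL_PARAMS = 35e9   # all expert + shared weights must fit in memory
ACTIVE_PARAMS = 3e9   # parameters used per forward pass

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GB (ignores KV cache and activations)."""
    return n_params * bytes_per_param / 1e9

# Only ~8.6% of parameters are active per pass.
print(f"Active fraction: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")

# Memory needed to hold ALL weights at common precisions (approximate).
for label, bpp in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label:>9}: ~{weight_memory_gb(TOTAL_PARAMS, bpp):.0f} GB for weights")
```

This is why MoE models like this one appeal to local users: compute cost tracks the 3B active parameters, but RAM/VRAM must still accommodate the full 35B (roughly 70 GB at bf16, under 20 GB at 4-bit, before KV cache).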

Meet Qwen3.6-35B-A3B: now open-source! 🚀🚀

A sparse MoE model, 35B total params, 3B active. Apache 2.0 license.

- Agentic coding on par with models 10x its active size

- Strong multimodal perception and reasoning ability

- Multimodal thinking + non-thinking modes

Efficient. Powerful. Versatile.

Blog: https://qwen.ai/blog?id=qwen3.6-35b-a3b

Qwen Studio: chat.qwen.ai

Hugging Face: https://huggingface.co/Qwen/Qwen3.6-35B-A3B

ModelScope: https://modelscope.cn/models/Qwen/Qwen3.6-35B-A3B

submitted by /u/ResearchCrafty1804