Meet Qwen3.6-35B-A3B: now open-source! 🚀🚀 A sparse MoE model, 35B total params, 3B active. Apache 2.0 license.
- Agentic coding on par with models 10x its active size
- Strong multimodal perception and reasoning ability
- Multimodal thinking + non-thinking modes
Efficient. Powerful. Versatile.
Blog: https://qwen.ai/blog?id=qwen3.6-35b-a3b
Qwen Studio: chat.qwen.ai
Hugging Face: https://huggingface.co/Qwen/Qwen3.6-35B-A3B
ModelScope: https://modelscope.cn/models/Qwen/Qwen3.6-35B-A3B
Qwen3.6-35B-A3B released!
Reddit r/LocalLLaMA / 4/16/2026
📰 News · Signals & Early Trends · Models & Research
Key Points
- Qwen3.6-35B-A3B is released as an open-source sparse MoE model with 35B total parameters and 3B active parameters at runtime.
- The release claims strong agentic coding performance, on par with models roughly 10x its active parameter count.
- It highlights robust multimodal perception and reasoning, including support for multimodal thinking with both thinking and non-thinking modes.
- The model is distributed via Qwen’s blog and interfaces (Qwen Studio), as well as on Hugging Face and ModelScope, making it accessible for local and platform-based adoption.
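The "35B total, 3B active" split comes from sparse mixture-of-experts routing: each token is sent to only a few experts, so most parameters sit idle on any given forward pass. The following is a minimal illustrative sketch of top-k expert routing in NumPy; it is not Qwen's actual architecture or routing code, and all names (`moe_forward`, `gate_w`, `experts`) are hypothetical.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Sparse MoE layer: route a token to its top_k experts only.

    Because unselected experts are never evaluated, the per-token
    "active" parameter count is a small fraction of the total --
    the idea behind a 35B-total / 3B-active model.
    """
    logits = x @ gate_w                    # router scores, one per expert
    top = np.argsort(logits)[-top_k:]      # indices of the top_k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Weighted sum of the selected experts' outputs; the other
    # experts contribute nothing and cost no compute.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, num_experts = 8, 16
gate_w = rng.normal(size=(d, num_experts))
# Each "expert" is a tiny linear layer; only top_k of them run per token.
expert_ws = [rng.normal(size=(d, d)) for _ in range(num_experts)]
experts = [lambda x, w=w: x @ w for w in expert_ws]

x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, top_k=2)
print(y.shape)  # (8,)
```

Here 2 of 16 experts fire per token, so only about 1/8 of the expert parameters are active, which is the same order of sparsity the release advertises.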
Related Articles

Black Hat Asia
AI Business

Introducing Claude Opus 4.7
Anthropic News

AI traffic to US retailers rose 393% in Q1, and it’s boosting their revenue too
TechCrunch

The US Government Fired 40% of an Agency, Then Asked AI to Do Their Jobs
Dev.to

🚀 ROSE: Rethinking Computer Vision as a Retrieval-Augmented 🤖 System
Dev.to