Qwen Team Open-Sources Qwen3.6-35B-A3B: A Sparse MoE Vision-Language Model with 3B Active Parameters and Agentic Coding Capabilities

MarkTechPost / 4/17/2026


Key Points

  • Qwen Team has open-sourced Qwen3.6-35B-A3B, a vision-language model built on a sparse Mixture-of-Experts (MoE) architecture.
  • The model activates only about 3B of its 35B total parameters during inference, reducing compute compared with dense models of similar scale.
  • It targets multi-modal (vision + language) capabilities while emphasizing “agentic coding” functionality.
  • The release provides the community with access to a new VLM checkpoint and associated artifacts for experimentation and downstream development.
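The “35B total, 3B active” split comes from sparse MoE routing: a gating network picks a small top-k subset of experts for each token, so only those experts’ weights are exercised in that forward pass. The sketch below illustrates the idea with NumPy; all names, shapes, and the top-k scheme are illustrative assumptions, not code or configuration from the Qwen release.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Illustrative sparse-MoE layer: route each token to its top-k experts.

    Only k of len(experts) expert weight sets are touched per token, which is
    why a 35B-parameter MoE can run with only ~3B 'active' parameters.
    (Sketch only; not Qwen's implementation.)
    """
    logits = x @ gate_w                          # (tokens, n_experts) gating scores
    topk = np.argsort(logits, axis=-1)[:, -k:]   # top-k expert indices per token
    # Softmax over just the selected experts' scores to get mixing weights.
    sel = np.take_along_axis(logits, topk, axis=-1)
    weights = np.exp(sel - sel.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                  # combine the k chosen experts' outputs
        for j in range(k):
            out[t] += weights[t, j] * experts[topk[t, j]](x[t])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 16, 4
# Toy experts: each is a distinct linear map (default arg freezes its own W).
experts = [lambda v, W=rng.normal(size=(d, d)) / d: v @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
x = rng.normal(size=(tokens, d))
y = moe_forward(x, gate_w, experts, k=2)         # shape (tokens, d)
```

With k=2 of 16 experts active, each token uses 1/8 of the expert parameters, mirroring (at toy scale) how an A3B-style model keeps per-token compute close to that of a small dense model.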

