Alibaba Qwen Team Releases Qwen3.6-27B: A Dense Open-Weight Model Outperforming 397B MoE on Agentic Coding Benchmarks

MarkTechPost / 4/23/2026


Key Points

  • Alibaba’s Qwen team released Qwen3.6-27B, the first dense open-weight model in the Qwen3.6 family, positioned as highly capable for coding agents.
  • The model reportedly improves agentic coding performance on benchmarks and is claimed to outperform a 397B MoE model in those evaluations.
  • Qwen3.6-27B introduces a “Thinking Preservation” mechanism intended to better retain intermediate reasoning.
  • It uses a hybrid architecture combining Gated DeltaNet linear attention with conventional self-attention to balance capability and efficiency.

Alibaba’s Qwen Team has released Qwen3.6-27B, the first dense open-weight model in the Qwen3.6 family — and arguably the most capable 27-billion-parameter model available today for coding agents. It brings substantial improvements in agentic coding, a novel Thinking Preservation mechanism, and a hybrid architecture that blends Gated DeltaNet linear attention with traditional self-attention — all […]
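To make the hybrid design concrete, here is a minimal toy sketch of the general pattern: a stack that interleaves gated delta-rule linear-attention layers (constant-size recurrent state, no T×T attention matrix) with occasional full causal softmax-attention layers for global mixing. This is an illustration of the technique only, not Qwen3.6-27B's actual implementation; the layer ratio, gate values, and single-head dimensions below are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_softmax_attention(q, k, v):
    # Standard scaled dot-product attention with a causal mask. q, k, v: (T, d).
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf
    return softmax(scores, axis=-1) @ v

def gated_delta_linear_attention(q, k, v, alpha, beta):
    # Toy gated delta-rule recurrence (simplified from the Gated DeltaNet idea):
    #   S_t = alpha_t * S_{t-1} + beta_t * outer(k_t, v_t - S_{t-1}.T @ k_t)
    # State S is d x d regardless of sequence length, so cost is O(T * d^2)
    # with O(d^2) memory -- no quadratic attention matrix is materialized.
    T, d = q.shape
    S = np.zeros((d, d))
    out = np.empty_like(v)
    for t in range(T):
        S = alpha[t] * S + beta[t] * np.outer(k[t], v[t] - S.T @ k[t])
        out[t] = S.T @ q[t]
    return out

def hybrid_stack(x, Wq, Wk, Wv, n_layers=4, softmax_every=4):
    # Interleave linear-attention layers with a full softmax layer, here 3:1.
    # The actual ratio and gating in Qwen3.6-27B are not public details here.
    T, _ = x.shape
    alpha = np.full(T, 0.95)  # per-token decay gate (fixed for simplicity)
    beta = np.full(T, 0.5)    # per-token write strength (fixed for simplicity)
    h = x
    for i in range(n_layers):
        q, k, v = h @ Wq, h @ Wk, h @ Wv
        if (i + 1) % softmax_every == 0:
            h = h + causal_softmax_attention(q, k, v)   # global mixing layer
        else:
            h = h + gated_delta_linear_attention(q, k, v, alpha, beta)
    return h
```

Because every layer is causal, output at position t depends only on tokens up to t, while the occasional softmax layer restores exact all-pairs retrieval that pure linear attention can struggle with.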
