| Note: First is Qwen3.5 35B MoE (left) and second is Qwen3.6 (right). Hi guys, I just did a quick comparison of Qwen3.6 35B MoE against Qwen3.5 35B MoE, with reasoning off, using llama.cpp and the same quant (unsloth 4K_XL GGUF). The first outcome is Qwen3.5's and the second is Qwen3.6's. I'm leaving it with you all to judge; I have to do more experiments before concluding anything. I used the same skills that I created with Qwen3.5 35B before. |
Comparison Qwen 3.6 35B MoE vs Qwen 3.5 35B MoE on Research Paper to WebApp
Reddit r/LocalLLaMA / 4/17/2026
💬 Opinion · Signals & Early Trends · Tools & Practical Usage · Models & Research
Key Points
- The post compares Qwen3.5 35B MoE vs Qwen3.6 35B MoE on a “Research Paper to WebApp” conversion task, using the same local setup and quantization (unsloth 4K_XL GGUF) with reasoning turned off.
- The author reports using the existing “research-webapp-skill” created for Qwen3.5 and runs both models via llama.cpp/llama-server to keep conditions as consistent as possible.
- Results are presented side by side (Qwen3.5 on the left, Qwen3.6 on the right), but the author cautions that the comparison is preliminary and more experiments are needed.
- The shared command includes specific inference parameters (context length, batching, temperature/top-p/top-k, and penalties), indicating an attempt to control for generation behavior.
- Overall, it’s a practical, user-driven evaluation signal for developers deciding whether upgrading from Qwen 3.5 to 3.6 improves research-to-webapp task performance.
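The exact command from the post is not reproduced here, but a llama-server launch that pins down the generation parameters the summary mentions (context length, batching, temperature/top-p/top-k, penalties) might look like the sketch below. The model filename and every numeric value are illustrative placeholders, not the author's settings; only the flag names are real llama.cpp options.

```shell
# Hypothetical llama.cpp llama-server invocation for an A/B comparison.
# The GGUF filename and all numeric values are placeholders, NOT the
# author's actual settings; only the flag names are real llama-server options.
llama-server \
  -m qwen-35b-moe-Q4_K_XL.gguf \
  -c 32768 \
  -b 2048 \
  --temp 0.7 \
  --top-p 0.8 \
  --top-k 20 \
  --repeat-penalty 1.05 \
  --port 8080
```

To keep the comparison fair, both models would be started with an identical command, changing only the `-m` model path, so that sampling behavior is held constant and only the weights differ.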