Comparison Qwen 3.6 35B MoE vs Qwen 3.5 35B MoE on Research Paper to WebApp

Reddit r/LocalLLaMA / 4/17/2026


Key Points

  • The post compares Qwen3.5 35B MoE against Qwen3.6 35B MoE on a “Research Paper to WebApp” task, using the same local setup and quantization (Unsloth UD-Q4_K_XL GGUF) with reasoning turned off.
  • The author reports using the existing “research-webapp-skill” created for Qwen3.5 and runs both models via llama.cpp/llama-server to keep conditions as consistent as possible.
  • Results are presented side by side (Qwen3.5 on the left, Qwen3.6 on the right), but the author cautions that the comparison is preliminary and more experiments are needed.
  • The shared command includes specific inference parameters (context length, batching, temperature/top-p/top-k, and penalties), indicating an attempt to control for generation behavior.
  • Overall, it’s a practical, user-driven evaluation signal for developers deciding whether upgrading from Qwen 3.5 to 3.6 improves research-to-webapp task performance.

Note: First is Qwen3.5 35B MoE (Left) and Second is Qwen3.6 (Right)

Hi guys,

I just did a quick comparison of Qwen3.6 35B MoE against Qwen3.5 35B MoE, with reasoning off, using llama.cpp and the same quant (Unsloth UD-Q4_K_XL GGUF).

The first result is Qwen3.5's output and the second is Qwen3.6's.

I'll leave it with you all to judge. I have to do more experiments before concluding anything.

I used the same skill that I created with Qwen3.5 35B before:
statisticalplumber/research-webapp-skill

```bat
@echo off
title Llama Server

:: Set the model path
set MODEL_PATH=C:\Users\Xyane\.lmstudio\models\unsloth\Qwen3.6-35B-A3B-GGUF\Qwen3.6-35B-A3B-UD-Q4_K_XL.gguf

echo Starting Llama Server...
echo Model: %MODEL_PATH%

llama-server.exe -m "%MODEL_PATH%" --chat-template-kwargs "{\"enable_thinking\": false}" --jinja -fit on -c 90000 -b 4096 -ub 1024 --reasoning off --presence-penalty 1.5 --repeat-penalty 1.0 --temp 0.6 --top-p 0.95 --min-p 0.0 --top-k 20 --keep 1024 -np 1

if %ERRORLEVEL% NEQ 0 (
    echo.
    echo [ERROR] Llama server exited with error code %ERRORLEVEL%
    pause
)
```
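For anyone reproducing this: once llama-server is up, it exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so both models can be queried with identical client-side settings. Here is a minimal, hypothetical Python sketch (not from the original post); it assumes the server's default port 8080, and note that `top_k`, `min_p`, and `repeat_penalty` are llama.cpp-specific extensions to the OpenAI payload:

```python
# Sketch: query the local llama-server with the same sampling parameters
# used in the batch script above. Assumptions: default port 8080 (adjust
# if you pass --port), and the llama.cpp OpenAI-compatible endpoint.
import json
import urllib.request


def build_request(prompt: str) -> dict:
    """Build a chat-completion payload mirroring the script's sampling flags."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,       # --temp 0.6
        "top_p": 0.95,            # --top-p 0.95
        "top_k": 20,              # --top-k 20 (llama.cpp extension)
        "min_p": 0.0,             # --min-p 0.0 (llama.cpp extension)
        "presence_penalty": 1.5,  # --presence-penalty 1.5
        "repeat_penalty": 1.0,    # --repeat-penalty 1.0 (llama.cpp extension)
    }


def ask(prompt: str, url: str = "http://127.0.0.1:8080/v1/chat/completions") -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarize this paper's method section as a web-app plan."))
```

Running the same prompt through this client against each model (swapping only `MODEL_PATH` in the batch script) keeps generation behavior as comparable as the post intends.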
submitted by /u/dreamai87