Qwen3.6. This is it.

Reddit r/LocalLLaMA / 4/17/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A Reddit user reports running Qwen 3.6 to autonomously build and test a tower defense game via MCP-installed tools, and claims the agent is actually executing the development workflow.
  • The user describes the system testing an “upgrade” feature, detecting rendering issues in a canvas, and then fixing them during its own run.
  • They also claim the model identified and addressed a bug related to wave completion, with the agent continuing through debugging/testing steps.
  • The post includes a llama.cpp server command/config for launching Qwen 3.6 with a specific GGUF model and various runtime parameters, suggesting a local/self-hosted setup.
  • The author adds an edit noting that the “open code” still had a 27B model alias, and shares they quickly tested locally and posted their results right away due to excitement.

https://preview.redd.it/nxn2rr15vqvg1.png?width=1920&format=png&auto=webp&s=8ec85d90b1286a6e7813c91a0a83c748e94ca849

I gave it a task: build a tower defense game, and use screenshots from the installed MCP to confirm the build.

My God, it's actually doing it. It's now testing the upgrade feature.
It noticed the canvas wasn't rendering at one point, and fixed it.
It caught its own bug in wave completion and is actually working through it...

I am blown away...
I can't imagine what the Qwen Coder that's following will be able to do.
What a time we're in.

llama-server -m "{PATH_TO_MODEL}\Qwen3.6\Qwen3.6-35B-A3B-UD-Q6_K_XL.gguf" --mmproj "{PATH_TO_MODEL}\Qwen3.6\mmproj-F16.gguf" --chat-template-file "{PATH_TO_MODEL}\chat_template\chat_template.jinja" -a "Qwen3.5-27B" --cpu-moe -c 120384 --host 0.0.0.0 --port 8084 --reasoning-budget -1 --top-k 20 --top-p 0.95 --min-p 0 --repeat-penalty 1.0 --presence-penalty 1.5 -fa on --temp 0.7 --no-mmap --no-mmproj-offload --ctx-checkpoints 5
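For anyone who wants to poke at the same setup, here's a minimal sketch of hitting that server through llama.cpp's OpenAI-compatible chat endpoint. The port (8084), the model alias ("Qwen3.5-27B" from the `-a` flag), and the temperature (0.7) come from the command above; the helper name, base URL, and prompt are my own illustrative choices.

```python
import json
import urllib.request

def build_request(prompt: str,
                  base_url: str = "http://localhost:8084",
                  model: str = "Qwen3.5-27B") -> urllib.request.Request:
    """Build a chat-completion request for the llama.cpp server above."""
    payload = {
        "model": model,  # must match the -a alias passed to llama-server
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # mirrors --temp 0.7 from the launch command
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("Build a tower defense game in a single HTML file.")
    # Sending it requires the server to actually be running:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```

The actual agentic run in the post goes through MCP tooling on top of this, but the raw endpoint is the same one the agent client talks to.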

EDIT: It's been pointed out that open code still shows my 27B model alias.
I'm lazy, I didn't even bother changing the model name. The llama.cpp server config above is what I'm running; I was so excited I tested and came here right away.

submitted by /u/Local-Cardiologist-5