Qwen 3.6 35B UD 2 K_XL is punching above its weight and quantization (No one is GPU Poor now)

Reddit r/LocalLLaMA / 4/17/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A Reddit user reports testing the Qwen 3.6 UD 2 K_XL (Qwen 35B) Unsloth model on a paper-to-web-app task and says it performs very well.
  • They claim the model handled 58 tool calls with a 98.3% success rate and correctly managed large context using llama.cpp on a laptop with 16GB VRAM.
  • The user states the model processed about 2.7 million tokens while building the app from the provided paper.
  • They share a suggested workflow/commands to run the model via llama-server (e.g., with a 90,000 context length setting) and provide a link to a related “research-webapp-skill.”

Hi guys,

Back again. I have tested the Qwen 3.6 UD 2 K_XL Unsloth model on the same paper-to-web-app task. The model is performing very well: it handled all tool calls properly and also managed a large context using llama.cpp on a laptop with 16GB of VRAM.

I have attached all the details: there were 58 tool calls in total, with a success rate of 98.3%. The model also processed around 2.7 million tokens while building the app from the given paper.
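As a quick sanity check on those numbers (assuming each tool call either succeeds or fails outright), a 98.3% success rate over 58 calls corresponds to 57 successes and a single failure:

```python
total_calls = 58
successes = 57  # assumption: exactly one failed call
success_rate = round(successes / total_calls * 100, 1)
print(success_rate)  # 98.3
```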

You can test this model using the same skills I created earlier with the Qwen 35B model:
statisticalplumber/research-webapp-skill

@echo off
title Llama Server - Gemma 4

:: Set the model path
set MODEL_PATH=C:\Users\test\.lmstudio\models\unsloth\Qwen3.6-35B-A3B-GGUF\Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf

echo Starting Llama Server...
echo Model: %MODEL_PATH%

llama-server.exe -m "%MODEL_PATH%" --chat-template-kwargs "{\"enable_thinking\": false}" --jinja -fit on -c 90000 -b 4096 -ub 1024 --reasoning off --presence-penalty 1.5 --repeat-penalty 1.0 --temp 0.6 --top-p 0.95 --min-p 0.0 --top-k 20 --context-shift --keep 1024 -np 1

if %ERRORLEVEL% NEQ 0 (
    echo.
    echo [ERROR] Llama server exited with error code %ERRORLEVEL%
    pause
)
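Once llama-server is up, it serves an OpenAI-compatible HTTP API (on port 8080 by default). A minimal Python client sketch, assuming the default port and a locally running server (the `chat` and `build_payload` names here are illustrative, not part of the original workflow):

```python
import json
import urllib.request


def build_payload(prompt: str) -> bytes:
    # Sampling settings mirror the flags passed to llama-server above.
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,
    }).encode("utf-8")


def chat(prompt: str, host: str = "http://localhost:8080") -> str:
    # llama-server exposes an OpenAI-compatible chat completions endpoint.
    req = urllib.request.Request(
        host + "/v1/chat/completions",
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

Since the flags above disable thinking and reasoning output, responses should come back as plain chat completions.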
submitted by /u/dreamai87