Unsloth/Qwen3.6-35b-a3b -> Q5_K_S vs Q4_K_XL

Reddit r/LocalLLaMA / 4/19/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A user reports testing Unsloth's Qwen3.6-35b-a3b quantizations with the recommended settings, comparing the Q5_K_S and Q4_K_XL variants.
  • They claim Q4_K_XL performs substantially better for their tasks, including web research, document research, transcripts, Python/HTML coding, and debugging.
  • The user specifically highlights improved results for web search when using Q4_K_XL compared with Q5_K_S.
  • They speculate that the Q4 model has stronger reasoning capabilities, and ask whether others have observed similar differences.
  • The post is based on personal experimentation rather than controlled benchmarks, inviting community feedback.

I ran both from Unsloth with the recommended settings, and what I found is that Q4_K_XL does a LOT better in my use cases: web research, document research, transcripts, Python and HTML coding, and code debugging.
Especially in web search.
It looks to me like reasoning is a lot stronger in the Q4 model.
Has anybody else noticed that?
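Since the post reports an informal impression rather than a controlled comparison, a simple way to make such claims more concrete is a small A/B harness that runs the same prompts through both quants and tallies which answer a judge prefers. The sketch below is purely illustrative: the two model callables stand in for however you invoke the Q5_K_S and Q4_K_XL GGUF files (e.g. via llama.cpp), and the scoring function is a placeholder you would replace with your own quality check.

```python
# Hypothetical side-by-side quant comparison harness (not from the post).
# `model_a` / `model_b` are stand-ins for runs of the two GGUF quants;
# `judge` scores an answer for a prompt, higher = better.
from typing import Callable, Dict, List


def compare_quants(
    prompts: List[str],
    model_a: Callable[[str], str],       # e.g. Q5_K_S
    model_b: Callable[[str], str],       # e.g. Q4_K_XL
    judge: Callable[[str, str], float],  # (prompt, answer) -> score
) -> Dict[str, int]:
    """Tally which quant's answer the judge prefers on each prompt."""
    tally = {"a": 0, "b": 0, "tie": 0}
    for prompt in prompts:
        score_a = judge(prompt, model_a(prompt))
        score_b = judge(prompt, model_b(prompt))
        if score_a > score_b:
            tally["a"] += 1
        elif score_b > score_a:
            tally["b"] += 1
        else:
            tally["tie"] += 1
    return tally


if __name__ == "__main__":
    # Toy stand-ins: canned answers and a length-based judge,
    # just to show the harness shape.
    prompts = ["summarize this page", "debug this Python function"]
    model_a = lambda p: "short answer"
    model_b = lambda p: "a longer, more detailed answer"
    judge = lambda p, ans: float(len(ans))
    print(compare_quants(prompts, model_a, model_b, judge))
    # -> {'a': 0, 'b': 2, 'tie': 0}
```

Running a fixed prompt set for each of the poster's task types (web research, coding, debugging) and counting preferences would turn "Q4 feels stronger" into a number the community could compare against.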

submitted by /u/KringleKrispi