Qwen-3.6-27B, llamacpp, speculative decoding - appreciation post
Reddit r/LocalLLaMA / 4/23/2026

First, a little explanation of what is happening in the pictures: I ran a small experiment to determine how much of a speed improvement speculative decoding brings to the new Qwen (TL;DR: a big one!).

The last image shows the finished, beautiful aquarium. Aesthetics and functionality are on another level compared with older models of similar size, and with many much bigger ones. Speed went 13.60 > 25.53 > 68.35 > 136.75 t/s during the session, and every time Qwen delivered complete code. I use this kind of workflow very often. And all of this thanks to one simple line in the llama-server command. I am not sure this is the best setting, but it works well for me; I will play with it more. My llama-swap command: My Linux PC has 40 GB of VRAM (RTX 3090 and RTX 4060 Ti) and 128 GB of DDR5 RAM. Big thanks to all the smart people who contribute to llama.cpp, to this Reddit community, and to the Qwen crew. Free lunch, try it out...

Edit: I forgot to mention some changes in llama.cpp from two days ago, so try to update.
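The post does not show the exact llama-server line it used (and mentions an ngram-based setup), so as a rough illustration, here is what classic draft-model speculative decoding looks like with llama-server. The model file names and all parameter values below are hypothetical, not the author's actual settings:

```shell
#!/bin/sh
# Hypothetical sketch: draft-model speculative decoding with llama-server.
# A small draft model (-md) proposes tokens that the large target model (-m)
# verifies in batches. File names and values are illustrative only.
llama-server \
  -m qwen3.6-27b-q4_k_m.gguf \
  -md qwen3.6-draft-q8_0.gguf \
  --draft-max 16 \
  --draft-min 1 \
  -ngl 99 -ngld 99 \
  --port 8080
```

Here `--draft-max`/`--draft-min` bound how many tokens the draft model proposes per step, and `-ngl`/`-ngld` control GPU offload for the target and draft models respectively.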
💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage
Key Points
- The author describes an experiment comparing generation speed while using Qwen-3.6-27B with llama.cpp, showing large improvements across successive program versions.
- Token generation speed increased from 13.60 t/s to 25.53 t/s, then to 68.35 t/s, and finally to 136.75 t/s within the same session using speculative decoding.
- The post attributes the speed gains to a specific llama-server speculative decoding configuration (ngram speculative decoding with tuned parameters).
- The author also notes a workflow benefit: Qwen successfully detected and helped fix a bug when the user provided a screenshot with a browser console.
- They conclude that, while settings may not be optimal, updating llama.cpp and trying speculative decoding can yield substantial practical speedups on local hardware.
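To see why the speedups in the points above are possible, the core speculative-decoding loop can be sketched in a few lines. This is a toy model of the idea, not llama.cpp's implementation; the function and parameter names are made up for illustration:

```python
def speculative_step(draft_propose, target_accepts, k=4):
    """One speculative-decoding step (toy sketch, not llama.cpp's code):
    a cheap draft model proposes up to k tokens, the expensive target
    model verifies them all in a single forward pass, and the longest
    accepted prefix is kept."""
    proposed = [draft_propose() for _ in range(k)]
    accepted = []
    for tok in proposed:
        if target_accepts(tok):
            accepted.append(tok)
        else:
            break  # first rejection discards the rest of the draft
    # The target pass always yields at least one token of its own, so each
    # step emits len(accepted) + 1 tokens for roughly the cost of a single
    # target-model forward pass.
    accepted.append("<target-token>")
    return accepted

# Best case: all 4 draft tokens accepted -> 5 tokens per target pass,
# which is where large t/s multipliers come from.
print(len(speculative_step(lambda: "tok", lambda t: True, k=4)))   # 5
# Worst case: nothing accepted -> 1 token, same as normal decoding.
print(len(speculative_step(lambda: "tok", lambda t: False, k=4)))  # 1
```

Because the target model only verifies (it never emits a token the draft chose for it incorrectly), the output distribution matches normal decoding; acceptance rate is what determines the realized speedup.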