It's been a busy week testing and trying to get the 27B model set up correctly.
TL;DR: The only setup that worked on my dual 3090s was this one. The funny thing? I just gave the Qwen27B LlamaCPP setup link to the Pi agent and asked it to handle it, and it basically set itself up in one shot.
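For context, a dual-3090 llama.cpp setup typically boils down to a single `llama-server` launch. This is only a rough sketch, not the exact command from the tutorial: the model filename, context size, and port below are placeholders you'd adjust for your own machine.

```shell
# Hypothetical llama-server launch for two 3090s (24 GB each):
#   -ngl 99           offload all layers to the GPUs
#   --tensor-split    split the weights evenly across both cards
#   -c 32768          context window; reduce it if VRAM runs tight
# The model filename, host, and port are illustrative placeholders.
llama-server -m ./qwen-27b-autoround-q4.gguf \
  -ngl 99 --tensor-split 1,1 -c 32768 \
  --host 127.0.0.1 --port 8080
```

Once it's up, the agent just needs the OpenAI-compatible endpoint it exposes (`http://127.0.0.1:8080/v1` in this sketch) as its API base.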
I then compared 37B-Q8 vs. Qwen3.6-27B-AutoRound-Q4, and I was amazed — the AutoRound version was both faster and smarter.
Here's what I tested:
1. Modem access script — I gave it the IP and password. After about 20 minutes, success.
2. Bug hunting — I asked it to find issues in a complex project I'm working on. It found real bugs, and GPT-4.5 Turbo confirmed them. My take: Qwen seems better at digging deep. Cloud models might be prompted to save tokens, so they don't dig as far (just my two cents).
3. Android app — This was tough, but it was done in one shot and worked as expected.
I'm now using it on another project. It reminds me of GLM-4.5 — sometimes I need to prompt it 2–3 times to get it to change or fix something.
Edit: this is the tutorial I followed: tutorial