Hey y'all!
I recently wrote a post in Russian about my experience comparing Qwen-3.6-27B with lower-tier cloud models on hard tasks, and I wanted to share a translation, since I found the results interesting and surprising. It might break Rule 3, since it's an evaluation of LLM-written code, but whatever: the methodology is handcrafted and the results are still non-trivial. Apologies for the translation quality, my English is not that good.
___
I once had a server with a 3090 and a Xeon from AliExpress, and I used to run local models on it. This was back in those wonderful times when all interaction with LLMs happened through a web UI, agents were only just starting to appear, and if you wanted to write code properly, you had to copy it from the chat into a file and back again. Back then, I ran Mixtral 8x7B locally, partially offloaded into RAM, and I was extremely pleased with it. Generation speed was around 8 tokens per second, which was plenty for casual chat, and Mixtral successfully wrote essays for my Entrepreneurship & Innovation courses at university. I tried using it for code generation too, or rather for Ansible configs, and predictably got chewed out by my team lead for stupid mistakes. Fun times.
Now Qwen-3.6-27B and Qwen-3.6-35B-A3B are out: two small models specifically tuned for coding and agentic tasks and aimed at local inference. To run them in full precision, that is, in FP8 (they were natively trained in it), you need around 36 and 40 GB of VRAM respectively. But we are not proud people and are happy to compromise, so we can take GGUFs in q4_k_m or even q3_k_s to fit them on local hardware.
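For reference, loading such a GGUF locally can look roughly like this with the llama-cpp-python bindings. This is a minimal sketch, not my exact setup: the model filename is a placeholder, and the layer split is whatever fits your card.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is a hypothetical filename; n_gpu_layers controls how many
# transformer layers go to VRAM, with the rest offloaded to system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen-3.6-27b-q4_k_m.gguf",  # placeholder name
    n_gpu_layers=40,   # lower this if the model does not fit in VRAM
    n_ctx=32768,       # context window; the KV cache grows with this
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a retry wrapper for an HTTP call."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```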
I became curious about how capable local models really are at vibe coding. Obviously, they will not replace Opus or Sonnet, so as a satisfactory target I picked a sub-frontier model from a frontier lab: GPT-Codex-Spark. It has a 262k context window; it is not as smart as full Codex or GPT-5.2/5.4/5.5, but it is perfectly capable of calling tools, writing code, and so on. As an approximation of a local model, it works well enough, with the difference that it is super fast and costs $100 per month, while a local model will be super slow and free, or rather, will cost whatever electricity my gaming PC consumes. I also took Claude Haiku 4.5 to see what Anthropic has to offer.
For local inference hardware, I used a system with a Ryzen 7 7800X3D, 64 GB of DDR5-6400, and an RTX 5080 with 16 GB of VRAM. To make the task realistically difficult, I took a fairly complex work project, implementing an autoresearch loop from a relatively detailed design document, and prompted Qwen-3.6-27B-q4_k_m, Qwen-3.6-27B via OpenRouter, Gemma-4-31B via OpenRouter, Claude Haiku 4.5 in Pi Agent, and Codex-Spark in Codex to implement it using my AGENTS.md. The OpenRouter models were included to estimate, first, how expensive it would be to use these models via API, and second, the upper bound of their capabilities: full-precision inference rather than crippled quantized inference on my hardware.
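The OpenRouter runs can be driven with the stock OpenAI client, since OpenRouter exposes an OpenAI-compatible endpoint. A sketch, assuming the usual OPENROUTER_API_KEY environment variable; the model slug is my guess, not a confirmed ID:

```python
# Sketch of calling a model via OpenRouter's OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="qwen/qwen-3.6-27b",  # hypothetical slug
    messages=[
        {"role": "system", "content": "Follow AGENTS.md conventions."},
        {"role": "user", "content": "Implement the autoresearch loop per the design doc."},
    ],
)
print(resp.choices[0].message.content)
```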
Importantly, I deliberately chose a task that was too hard for these models. I did not expect even one of them to solve it cleanly. This is a common problem with local-model evals: people prompt them with tasks that are too simple, and then you get headlines like “My locally hosted Qwen matched Claude Opus in performance!” (both models wrote Snake in HTML, wow). In my case, the goal was not “solve the task” but “mess up as little as possible while attempting to solve it.” So we will evaluate the applicability of these models not by whether they solved the task (only one out of five did) but by the cleanliness of the failure and the number of fixes still needed to match the spec. I evaluated the implementations with Claude Code, using Claude Opus 4.7 at xhigh reasoning effort. It wrote the design document and was able to implement a clean solution itself (at least, according to GPT-5.5's review), so let us trust that it is a good judge.
Results:
- Gemma-4-31B failed completely. It wrote a skeleton solution, but mocked half of the modules and made several mistakes in the implementation. No tests, no __init__.py, no requirements.txt or pyproject.toml, and the docs basically say “just install NumPy and you’ll be fine.” Cost: $0.112, 803k context tokens consumed, 21k tokens generated.
- Codex-Spark (high reasoning) produced a very beautiful implementation, very quickly; pity it does not work. All the files are neatly arranged into folders, but the imports are wrong. The model hallucinated methods for its own code, did not write unit tests, and did everything in two commits: all the code plus documentation. I do not know how much money was spent; as far as I understand, Spark has no API. It used 1% of the Spark limits on the $100 subscription.
- Claude Haiku wrote very detailed docs and a README and created several Git branches (!), but it wrote no tests, leaks test data into train, computes metrics incorrectly, and does not provide the necessary samples to the proposer. The code has many TODOs and no exception handling, and the entire loop will crash on a single error. It read 246k tokens, wrote 78k tokens, and cost $1.067, the most expensive of the models tested.
- Qwen-3.6-27B-q4_k_m got it almost right, but there is a train-to-test leak in the code. It is a one-line fix, but still an error. In addition, there are no tests, no retries for LLM requests (though there is a TODO), and OPS.md does not describe common errors, how to fix them, the update guide, and so on. It read 39k tokens and wrote 45k tokens. It ran for almost an entire workday, around 8 hours; unsurprisingly, since I partially offloaded the model into RAM and got 10 TPS with an empty context and 1–2 TPS near the end of the run. This is exactly why I did not even try to run Gemma-4-31B locally, especially given its outdated architecture and a KV cache that is prohibitively heavy compared to Qwen's.
- Qwen-3.6-27B at full quality via OpenRouter unexpectedly solved the task almost completely. The most serious issue is that instead of hashing a mutable object, it uses a substring from it, meaning changes cannot be tracked (a sketch of this class of bug follows after the list). But the autoresearch loop is fully working. There are tests, docs, commits (no branches, true, but who cares, they are not necessary here), a README, and so on. The reason is probably simple: the model ran the tests it wrote, so it caught all the errors that appeared in the other implementations. It consumed 4.4M tokens (!) and wrote 58k tokens. The run cost $0.939, which was surprisingly expensive: the model costs $2 (!!!) per million tokens.
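To make that substring-versus-hash bug concrete, here is a hypothetical reconstruction of the pattern; the names are mine, not from the actual repo:

```python
# Hypothetical reconstruction of the change-tracking bug: a prefix of a
# mutable document is used as its identity, so any edit past the prefix is
# invisible. Hashing the full serialized content fixes it in one line.
import hashlib

doc = {"prompt": "original text...", "notes": "long tail that gets edited later"}

# Buggy: only the first 32 characters participate in the "fingerprint",
# so changes to the rest of the document go undetected.
key_buggy = str(doc)[:32]

# Fixed: hash the whole serialized object, so any change is detected.
key_fixed = hashlib.sha256(repr(doc).encode()).hexdigest()
```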
If we evaluate the solutions through the lens of “given competent feedback, which weak agent would be easiest to finish the job with?”, both Qwens win decisively. Full-quality Qwen has tests and can be fixed with two one-liners. Quantized Qwen can be fixed with one one-liner (and writing tests lol). Everything else is much less trivial to repair. Codex was especially disappointing: despite beautiful and clean architecture, the code does not import and is not covered by tests. A weak model, even with good feedback, will try to fix it and then say “I did everything, trust me bro” without actual confirmation that the fix worked.
So, conclusions: can a local model replace a $20, $100, or $200 subscription? Of course not. What's more, my small test is not representative at all: in real work, you have to navigate a large existing repository, not one-shot a project from a design document.
But I would still start thinking about a second GPU so that Qwen fits into VRAM and inference becomes faster. APIs are becoming more expensive, models generate more tokens, and subscriptions are getting restricted. I am confident that in six months a $20 plan will no longer allow anyone to vibe code properly, while $100 and $200 plans will either be cut down by limits to what the $20 Codex plan offered a month ago, or strangled through KYC. Qwen, meanwhile, runs on my gaming (!) PC, writes code (slowly and with mistakes, but still writes it), and is perfectly capable of replacing lower-tier proprietary models. If I add something like a 3060 to my setup, which costs about one and a half to two months of a $200 Claude subscription, I will be able to run Qwen in Q6_K fully in VRAM. It will be fast, it will probably match the performance of the uncompressed Qwen from OpenRouter, and compared to a $200-per-month toll it has a reasonable ROI. A rough estimate of the fit is sketched below.
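A back-of-envelope check that the two-card setup would actually hold the model, assuming roughly 6.6 bits per weight for Q6_K (the exact figure varies by tensor mix); this is my own arithmetic, not a measured number:

```python
# Rough VRAM estimate for a 27B dense model at Q6_K quantization.
params = 27e9
bits_per_weight = 6.6                              # assumed average for Q6_K
weights_gb = params * bits_per_weight / 8 / 1e9    # ~22.3 GB of weights
kv_and_overhead_gb = 4                             # rough allowance for KV cache etc.
total_gb = weights_gb + kv_and_overhead_gb         # ~26 GB
print(f"~{total_gb:.0f} GB needed vs 16 + 12 = 28 GB across a 5080 + 3060")
```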
I am confident that in six months the models will be updated, but the situation will remain roughly the same: Qwen-4 will handle vibe coding at the level of, or even better than, Claude Haiku 5, that is, at the level of the current Sonnet 4.6 / Opus 4.5. This means that with occasional and relatively cheap reviews from a large, competent model over API, we will be able to fully get rid of the OpenAI/Anthropic/Google subscriptions. And that warms my soul.
Claude's review document for the implementations:
https://github.com/chameleon-lizard/autoresearch_qwen_27b_q4_k_m/blob/main/autoresearch_review.md
Implementation repositories:
autoresearch_haiku:
https://github.com/chameleon-lizard/autoresearch_haiku
autoresearch_qwen_27b_q4_k_m:
https://github.com/chameleon-lizard/autoresearch_qwen_27b_q4_k_m
autoresearch_qwen_27b_openrouter:
https://github.com/chameleon-lizard/autoresearch_qwen_27b_openrouter
autoresearch_gemma_4_31b_openrouter:
https://github.com/chameleon-lizard/autoresearch_gemma_4_31b_openrouter
autoresearch_codex_spark:
https://github.com/chameleon-lizard/autoresearch_codex_spark