YouTuber tries Qwen 3.5 35B, Qwen 3.6 35B, and Gemma 4 27B to reverse engineer some large JS, with good results for Qwen 3.6

Reddit r/LocalLLaMA / 4/22/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • A YouTuber reportedly tested Qwen 3.5 35B, Qwen 3.6 35B, and Gemma 4 27B on the task of reverse-engineering a large JavaScript codebase.
  • The comparison suggests Qwen 3.6 delivers noticeably better results than prior Qwen versions in this kind of coding/inference workload.
  • The post notes that earlier Qwen 3 MoE models were perceived as weak at instruction following, and that their "dumb point" — the point in the context window where output quality degrades — came early.
  • The author asks whether others have also observed improved instruction-following performance in Qwen 3.6 based on their own experience.
  • Overall, the anecdotal results point to incremental quality gains in Qwen 3.6 that matter for practical reverse-engineering and code understanding tasks.

Found this interesting and thought I'd share.

A big problem I've had with Qwen 3 MoE is how bad it was at instruction following, and also, its 'dumb point' in the context window was really low. I was so turned off by it that I never tried Qwen 3.5 and kept using SEED OSS 36B for coding.

3.6 appears to have better instruction following than prior models. Do you find this to be the case yourself?

submitted by /u/mr_zerolith