My thoughts on Qwen and Gemma

Reddit r/LocalLLaMA / 4/17/2026


Key Points

  • The author praises recent major model releases from Qwen and Gemma and describes them as strong “local LLM” options with different strengths.
  • In the author’s STEM-focused and coding use cases, Qwen is viewed as more capable for coding (especially with Qwen 3.6) and has a more structured, logic-heavy approach.
  • For non-English performance, the author finds both models capable but notes Qwen may degrade when conversation is not in English, while Gemma holds up better across languages.
  • The author says Gemma is often more suitable overall for their use because it is more flexible in thinking (though sometimes “fuzzy”), while Qwen seems stronger for image recognition and Gemma lags in coding and certain tool-use optimizations.
  • They emphasize that both models exhibit bias and occasional hallucinations, and they recommend maintaining an active human review/brain to mitigate errors despite the benefits.

This spring has been really hot, since both local-LLM giants, Qwen and Gemma, released major models.
I'm really excited about these releases and happy with their capabilities.
Both are real heroes for local LLM, although I have the feeling they have different strengths.
For background, I use them for text review and grammar checking in the humanities/social sciences, plus some coding with Python (mostly light data analysis), web apps (JS, TS), and general stuff.
I use the 27B/31B dense and 35B/26B MoE models; I haven't tried the smaller models much.

Qwen
Strength

  • Knowledge, and the way/paradigm it approaches STEM topics.
  • Coding. It was already better, but with 3.6, its coding is far superior to Gemma's.

Weakness

  • Non-English languages. I feel it gets dumber when the text/conversation is not in English. I guess it does well in Chinese, but since I can't read Chinese, I have no clue.
  • I feel it sometimes tends to be too "logical" or hard-headed for my field.

Gemma

Strength

  • Flexible in its way of thinking, though sometimes "fuzzy". Still, for my use it is often better suited than Qwen.
  • Non-English languages. Unlike Qwen, it doesn't degrade in other languages.

Weakness

  • Coding. Gemma 4 is much better than 3, but still way behind Qwen.
  • Images. Qwen is better at image recognition.
  • Tool use. I guess it's not a problem with the model itself, but I feel the inference engines still lack optimization for it. Maybe the model architecture is too complicated? I have no idea.

Bias

Both have biases, in different ways/directions, especially on political/cultural topics. Since I believe a truly "neutral" model is impossible in general, I always keep that in mind. But I feel Qwen has moved toward neutral since 3.5 (before that it was much more biased, in my opinion), reaching a neutrality similar to Gemma's.

They still hallucinate occasionally and are sometimes dumb, but I think that's also good for me, since I still need to use my own brain/hands to cover for them so I don't get Alzheimer's.

Both are open weight, and I'll continue using them case by case.
My usage is not that heavy, so I may be missing something; this is just my opinion/feeling.
What are your thoughts? I'm curious.

submitted by /u/Internal-Thanks8812