For chat and Q&A: Which MoE model is better: Qwen 3.6 35B or Gemma 4 26B (no coding or agents)

Reddit r/LocalLLaMA / 4/19/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The post asks users to compare two specific Mixture-of-Experts (MoE) language models for chat and Q&A use cases.
  • The models being compared are Qwen 3.6 35B and Gemma 4 26B, with the stated constraint of no coding or agent-style workflows.
  • The discussion is framed as a practical “which is better” recommendation request rather than an official benchmark report.
  • The intent is to help readers choose the more suitable model for conversational and question-answering tasks within those constraints.

Thanks

submitted by /u/br_web