Larger Gemma-4/Qwen3.6

Reddit r/LocalLLaMA / 4/30/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • The post discusses the strong performance of the Qwen3.5-122B-A10B model at the Q6_K quantization setting.
  • It asks whether a larger Mixture-of-Experts (MoE) variant of Gemma-4 or a Qwen3.6 model will be released in the future.
  • The content is framed as a community speculation/question rather than a confirmed announcement.
  • The discussion is targeted at local/consumer LLM enthusiasts who evaluate models under specific quantization conditions.

Qwen3.5-122B-A10B at Q6_K is really good.

Do you think we will see a larger MoE Gemma-4 or Qwen3.6 at some point?
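For context on the setup being discussed: a Q6_K quant of a model like this would typically be run locally through llama.cpp or its Python bindings. The sketch below is illustrative only, not an official recipe; the GGUF file name is hypothetical, and the "A10B" suffix is assumed (following the usual naming convention) to mean roughly 10B active parameters per token, which is what makes a quant of this size practical on consumer hardware.

```python
# Minimal sketch: running a Q6_K GGUF quant locally with llama-cpp-python.
# The model file name below is hypothetical -- substitute whatever Q6_K GGUF
# you actually have on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-122b-a10b-Q6_K.gguf",  # hypothetical file name
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

out = llm(
    "Explain what a Mixture-of-Experts (MoE) model is in one sentence.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```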

submitted by /u/Non-Technical