Will Gemma 4 124B MoE open as well?
Reddit r/LocalLLaMA / 4/3/2026
💬 Opinion · Signals & Early Trends · Models & Research

> I don't really like to take X posts as a source, but it's Jeff Dean, so maybe there will be more surprises beyond what we just got. Thanks, Google! Edit: It seems Jeff deleted the mention of 124B. Maybe that's because it exceeded Gemini 3 Flash-Lite on benchmarks?
Key Points
- The post speculates that Google’s Jeff Dean may have implied additional Gemma 4 variants beyond what was already released, potentially including a 124B MoE model.
- The author notes that Jeff Dean’s mention of “124B” appears to have since been deleted, leaving the reason for the removal unclear.
- One explanation the author raises is that the 124B MoE model may have outperformed Gemini 3 Flash-Lite on benchmarks, prompting the mention’s removal.
- The discussion is presented as a signal from an X post that may precede further announcements; the community does not treat it as a confirmed release.
Related Articles
- Black Hat Asia (AI Business)
- Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption (Dev.to)
- WAN 2.1 Text-to-Video: A Developer's Honest Assessment After 6 Weeks of Testing (Dev.to)
- Cycle 243: 170 Cycles at $0: What I Learned From the Longest Survival Streak in AI Autonomous History (Dev.to)
- How We Used Claude Code's Leaked Architecture to Transform a 9B Model Into a Production Agent (Dev.to)