Found references to "models/gemma-4" hiding in AI Studio's code. Release imminent? 👀
Reddit r/LocalLLaMA / 4/1/2026
💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

From the post: There is a Kaggle link too: https://www.kaggle.com/models/google/gemma-4. ⚡ Two Gemma models, Significant-Otter and Pteronura, are being tested on LMArena and are quite strong for vision and coding. Pteronura seems to be a dense model (likely 27B) with factual knowledge below Flash 3.1 Lite but reasoning close to 3.1 Flash. Meanwhile, Significant-Otter seems to be the 120B model, which has good factual accuracy but is unstable, sometimes showing good reasoning and sometimes performing far worse than Pteronura.
Key Points
- Reddit post claims that references to “models/gemma-4” were found embedded in AI Studio’s codebase, suggesting a possible upcoming release.
- The post notes a Kaggle link for “google/gemma-4,” aligning with the idea that Gemma 4 artifacts may already be staged for public availability.
- It also reports two Gemma model variants ("Significant-Otter" and "Pteronura") being tested on LMArena, with observed differences in vision/coding performance and reasoning stability.
- Reported observations suggest Pteronura may be a smaller dense model (estimated ~27B) while Significant-Otter may be a much larger model (estimated ~120B) that is less consistently reliable.
- Overall, the item functions as an early “signal” from code/asset references rather than an official announcement or confirmed launch date.
