Found references to "models/gemma-4" hiding in AI Studio's code. Release imminent? 👀

Reddit r/LocalLLaMA / 4/1/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • Reddit post claims that references to “models/gemma-4” were found embedded in AI Studio’s codebase, suggesting a possible upcoming release.
  • The post notes a Kaggle link for “google/gemma-4,” aligning with the idea that Gemma 4 artifacts may already be staged for public availability.
  • It also reports two Gemma model variants (“Significant-Otter” and “Pteronura”) being tested on LMArena, with described performance differences across vision/coding and reasoning stability.
  • Reported benchmark observations suggest Pteronura may be a smaller dense model (estimated around 27B) while Significant-Otter may be a much larger (estimated ~120B) model that is less consistently reliable.
  • Overall, the item functions as an early “signal” from code/asset references rather than an official announcement or confirmed launch date.

https://preview.redd.it/dluo2rk7yisg1.png?width=550&format=png&auto=webp&s=dc257ec3f280a11025032af59aba0d54da20e030

There is a Kaggle link too: https://www.kaggle.com/models/google/gemma-4

https://preview.redd.it/l1hmjfbayisg1.png?width=530&format=png&auto=webp&s=28300f4a0b18f844740ea46144201a92f3a42c9c

⚡ Two Gemma models, Significant-Otter and Pteronura, are being tested on LMArena, and both are quite strong at vision and coding. Pteronura seems to be a dense model (likely 27B), with factual knowledge below Flash 3.1 Lite but reasoning close to 3.1 Flash. Significant-Otter, meanwhile, seems to be the 120B model: it has good factual accuracy but is unstable, sometimes showing good reasoning and sometimes performing far worse than Pteronura.

submitted by /u/Sadman782