Anyone compared Gemma 4 31B?

Reddit r/artificial / 4/10/2026

💬 Opinion · Signals & Early Trends · Models & Research

Key Points

  • The post discusses widespread claims that the Gemma 4 31B model performs exceptionally well, particularly for coding and everyday tasks.
  • It notes that, relative to much larger models such as the rumored Sonnet ~1.5T, Gemma 4 31B is comparatively small.
  • The author questions whether anyone has conducted direct comparisons or evaluations of Gemma 4 31B versus other models.
  • Overall, it functions as a request for comparative benchmarks and practical experiences rather than presenting new results.

I have been seeing a lot of people claiming how good the Gemma 4 31B model is.

I know that compared to models like Sonnet, which is rumored to be around 1.5T parameters, Gemma 4 31B is very small.

But people keep claiming Gemma is so good for coding and day-to-day tasks.

submitted by /u/Infinite-pheonix