One year later: this question feels a lot less crazy

Reddit r/LocalLLaMA / 4/10/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • A Reddit user reflects that a year earlier “Local o3” comparisons between models like Gemma 4 31B and OpenAI o3 seemed implausible, but have since become part of everyday discussion in the local-LLM community.
  • The post attributes the change to rapid progress in local AI, including improved ability to run and compare models on one’s own setup.
  • It emphasizes community learning effects, saying the user gained substantial knowledge from participating and wanted to share that appreciation.
  • Overall, the message frames the past year as a turning point where local model experimentation shifted from “crazy talk” to practical and observable reality.

"Local o3"

Gemma 4 31B vs OpenAI o3

https://www.reddit.com/r/LocalLLaMA/comments/1hj1dhk/local_o3/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Just thought I’d show how cool I was for asking this a year ago 😌. Thanks to this community, I've learned so much, and I wanted to share that I love being here!

But honestly, even more than that, it’s pretty amazing how far things have come in just one year. Back then this idea was crazy talk. Now we’re comparing models like this and watching local AI get better and better.

And by the way, no shame to anyone who didn’t think it was possible. I didn’t think we’d get here either.

https://preview.redd.it/p2wq6xup58ug1.png?width=669&format=png&auto=webp&s=6d4c879e4f2aee48339f8b2ed2ecc47aa42c60e6

submitted by /u/gamblingapocalypse