What's the smallest (most capable) model you've found?

Reddit r/LocalLLaMA / 4/16/2026

💬 Opinion · Signals & Early Trends · Models & Research

Key Points

  • The post asks community members which small models they have found that remain capable of basic English, particularly for one-way conversation use cases.
  • The author reports testing TinyStories (sub-100M parameters) in the browser and says it performs adequately but “falls apart” quickly.
  • They also mention Bonsai at 1.7B parameters (which they describe as "sub 300m") as a more promising option and suggest it could enable deployment on a public-facing site with user opt-in.
  • The overall goal is to discover lightweight models that can still maintain basic conversational ability at small sizes suitable for local or browser-based runs.

I found TinyStories (which is sub 100m) to run in the browser. It's alright, but falls apart quite easily. Now with Bonsai 1.7b (sub 300m), I have some hope to maybe run something on a public domain with user opt-in.

Anyone found anything else that's capable of basic English language? More of a one way conversation.

Anything come to mind?

submitted by /u/howtheydoingit