SOTA Language Models Under 14B?

Reddit r/LocalLLaMA / 4/2/2026

💬 Opinion · Signals & Early Trends · Tools & Practical Usage · Models & Research

Key Points

  • The post asks for the most capable recent small (under 14B parameters) state-of-the-art language models for broad question-answering across diverse topics, including math.
  • It seeks community-reported experiences, including which specific models work well or poorly for general QA performance.
  • The request implies a comparison among lightweight LLMs with an emphasis on practical effectiveness rather than theoretical properties.
  • No specific models or results are named in the post itself; the intent is to surface recommendations and evaluation insights for small-model deployment.

Hey guys,

I was wondering which recent state-of-the-art small language models are the best for general question-answering tasks (diverse topics including math)?

Any good or bad experiences with specific models?

Thank you!

submitted by /u/No-Mud-1902