qwen 9b is on another level
Reddit r/LocalLLaMA / 3/16/2026
💬 Opinion · Signals & Early Trends · Models & Research
Key Points
- A Reddit post claims that trillion-parameter models "have never seen such winrates," citing Qwen 9B's performance as evidence.
- The post includes a link to an image purportedly showing evaluation results that support the claim.
- The discussion centers on the performance and implications of very large language models, touching on scale versus efficiency.
- The content is social-media chatter rather than an official release or peer-reviewed result, so it should be read as early-trend noise rather than an established finding.
Related Articles
Is AI becoming a bubble, and could it end like the dot-com crash?
Reddit r/artificial

I made a 'benchmark' where LLMs write code controlling units in a 1v1 RTS game.
Dev.to

My AI Does Not Have a Clock
Dev.to

From Early Adopter to AI Instructor: Teaching 500 Engineers to Build with LLMs
Dev.to

How to settle on a coding LLM? What parameters to watch out for?
Reddit r/LocalLLaMA