Can anyone guess how many parameters Claude Opus 4.6 has?

Reddit r/LocalLLaMA / 3/26/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • The post discusses the role of parameter count in LLM performance, questioning whether Claude Opus 4.6’s quality comes mainly from being larger.
  • It suggests that scaling laws may eventually break down once most meaningful combinations of learned symbols are covered.
  • The author argues that internal techniques and optimizations could matter as much as sheer parameter count, given that models with hundreds of billions of parameters are already highly expressive.
  • The central prompt asks readers to estimate Claude Opus 4.6’s parameter count, reflecting uncertainty about proprietary model specifications.
There is a finite set of symbols that LLMs can learn from. The number of possible combinations is enormous, of course, but many of those combinations are not valid or meaningful. The big players claim that scaling laws are still holding, but I assume they will eventually break down, at least once most of the meaningful combinations of our symbols are covered. A model with something like 500B parameters can already represent a huge number of combinations. So is something like Claude Opus 4.6 good just because it's bigger, or because of the internal tricks and optimizations they use?
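
For anyone who wants to play with the "do returns diminish with size" intuition, here is a minimal sketch of the Chinchilla-style power law from Hoffmann et al. (2022), using that paper's published fitted constants. To be clear: Anthropic has never disclosed Claude Opus 4.6's parameter count, and frontier models almost certainly don't follow this exact fit; the sizes in the loop are hypothetical, purely to illustrate how slowly predicted loss falls as parameters grow.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022):
#   loss(N, D) = E + A / N**alpha + B / D**beta
# Constants below are that paper's fitted values, not anything
# specific to Claude Opus 4.6 (whose size is undisclosed).

E, A, B = 1.69, 406.4, 410.7   # irreducible loss + fitted coefficients
alpha, beta = 0.34, 0.28       # fitted exponents for params and tokens

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for n_params parameters trained
    on n_tokens tokens, under the fitted power law."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Hypothetical model sizes at a fixed (assumed) 15T-token budget,
# just to show the diminishing returns the post is asking about:
for n in (70e9, 500e9, 2e12):
    print(f"{n/1e9:>6.0f}B params -> predicted loss {loss(n, 15e12):.3f}")
```

Under this fit, going from 500B to 2T parameters buys far less loss reduction than going from 70B to 500B did, which is roughly the intuition behind the question: past some size, the tricks may matter more than the count.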
submitted by /u/More_Chemistry3746