The data has been slowly building up, and it points to a likely rational economic conclusion: Anthropic is effectively constructively terminating its Max subscription plans, with the eventual goal of an enterprise-first (or enterprise-only) focus. Going forward, it appears poised to offer only (1) massively higher-priced subscription tiers or (2) dramatically stricter plan limits.
The term "constructive termination" fits here because Anthropic appears willing to let customers slowly attrit and churn through silent degradation rather than transparently communicate changes to plans, limits, or models.
The rational economic reading is that this is an attempt to salvage subscription ARR for as long as possible while making changes that reduce negative margins, ramping up the enterprise business, and slowing churn through publicly ambiguous statements about responsibility and vague technical explanations for regressions.
We are likely heading toward an era in which liberal access to frontier models is restricted to large enterprises, with dramatic cost barriers imposed on individuals and smaller teams. Absent very clear and open communication from Anthropic, with firm commitments about future plan expectations that subscribing individuals and teams can actually plan around, users should base their plans on the expectation of having less access to these models than they have today.
https://github.com/anthropics/claude-code/issues/46829#issuecomment-4233122128

