A growing number of developers and AI power users are taking to social media to accuse Anthropic of degrading the performance of Claude Opus 4.6 and Claude Code — intentionally or as an outcome of compute limits — arguing that the company’s flagship coding model feels less capable, less reliable and more wasteful with tokens than it did just weeks ago.
The complaints have spread quickly on GitHub, X and Reddit over the past several weeks, with a number of high-reach posts alleging that Claude has become worse at sustained reasoning, more likely to abandon tasks midway through, and more prone to hallucinations and self-contradictions.
Some users have framed the issue as “AI shrinkflation” — the idea that customers are paying the same price for a weaker product.
Others have gone further, suggesting Anthropic may be throttling or otherwise tuning Claude downward during periods of heavy demand.
Those claims remain unproven, and Anthropic employees have publicly denied that the company degrades models to manage capacity. At the same time, Anthropic has acknowledged real changes to usage limits and reasoning defaults in recent weeks, which has made the broader debate more combustible.
VentureBeat has reached out to Anthropic for further clarification on the recent accusations, including whether any recent changes to reasoning defaults, context handling, throttling behavior, inference parameters or benchmark methodology could help explain the spike in complaints.
Anthropic reportedly lost as much as $6 billion last year and is expected to lose another $8 billion to $10 billion this year. Revenue is growing, but infrastructure costs are growing even faster.
Maybe something had to give.