AMD’s AI director just analyzed 6,852 Claude Code sessions, 234,760 tool calls, and 17,871 thinking blocks.
Her conclusion: “Claude cannot be trusted to perform complex engineering tasks.”
Thinking depth dropped 67%. The average number of file reads before each edit fell from 6.6 to 2.0. The model started editing files it hadn’t even read.
Stop-hook violations went from zero to 10 per day.
Anthropic admitted they silently changed the default effort level from “high” to “medium” and introduced “adaptive thinking” that lets the model decide how much to reason.
No announcement. No warning.
When users shared transcripts, Anthropic’s own engineer confirmed the model was allocating ZERO thinking tokens on some turns.
The turns with zero reasoning? Those were the ones hallucinating.
AMD’s team has already switched to another provider.
But here’s what most people are missing.
This isn’t just a Claude story.
AMD had 50+ concurrent sessions running on one tool.
Their entire AI compiler workflow was built around Claude Code. One silent update broke everything.
That’s vendor lock-in. And it will keep happening.
→ Every AI company will optimize for their margins, not your workflow
→ Today’s best model is tomorrow’s second choice
→ If your workflow can’t survive a provider switch, you don’t have a workflow. You have a dependency
The fix is simple: stay multi-model (see the sketch after this list).
→ Use tools like Perplexity that let you swap between Claude, GPT, and Gemini in one interface
→ Learn prompt engineering that works across models, not tricks tied to one
→ Test alternatives monthly because the rankings shift fast
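Concretely, “multi-model” means your workflow talks to an interface, not a vendor. Here’s a minimal Python sketch of that idea; all the names here (ModelClient, ClaudeClient, GPTClient, run_workflow) are hypothetical illustrations, and in a real setup each adapter would wrap that vendor’s actual SDK:

```python
from typing import Protocol


class ModelClient(Protocol):
    """The only thing the workflow is allowed to depend on."""

    def complete(self, prompt: str) -> str: ...


class ClaudeClient:
    def complete(self, prompt: str) -> str:
        # Hypothetical stand-in; a real adapter would call the Anthropic SDK here.
        return f"[claude] {prompt}"


class GPTClient:
    def complete(self, prompt: str) -> str:
        # Hypothetical stand-in; a real adapter would call the OpenAI SDK here.
        return f"[gpt] {prompt}"


def run_workflow(client: ModelClient, task: str) -> str:
    # The workflow depends only on the interface, never on one vendor.
    return client.complete(task)


if __name__ == "__main__":
    # Switching providers is a one-argument change, not a rewrite:
    print(run_workflow(ClaudeClient(), "Review this diff"))
    print(run_workflow(GPTClient(), "Review this diff"))
```

The exact design doesn’t matter. What matters is that when the adapter is the only vendor-specific code you have, a silent model change becomes a config swap instead of a broken pipeline.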
Laurenzo said it herself: “6 months ago, Claude stood alone. Anthropic is far from alone at the capability tier Opus previously occupied.”