Opus 4.7 is terrible, and Anthropic has completely dropped the ball

Reddit r/artificial / 4/17/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Industry & Market Moves

Key Points

  • The author says they switched from ChatGPT to Anthropic’s Claude after issues with DoW and what they describe as “antics” from Sam Altman, and initially found Claude Opus 4.6 the best tool for synthesizing complex logic in their theoretical math and physics research.
  • They report that Claude Opus 4.7 has been unreliable in recent weeks, including frequent service downtime and repeated “spiraling” attempts when solving detailed proofs.
  • The author argues that workarounds—such as explicitly asking the model to think before answering—should not be necessary for a top-tier $20/month model.
  • They claim the model’s mid-response self-correction wastes time, often fails to produce usable results, and can quickly cause them to hit usage limits.
  • As a PhD student, they say the declining reliability makes it hard to justify paying out-of-pocket and are considering switching back to GPT despite preferring not to for personal reasons.

Tried posting this in r/ClaudeAI but it got auto-removed, and I was told to post it in the "Bugs Megathread." Don't really think it should have been removed, but whatever, I'll just post it here since I'm sure it's still relevant.

Like a lot of people, I switched from ChatGPT to Claude not too long ago during the whole DoW fiasco and Sam Altman “antics.” At first, I was genuinely impressed. I do fairly heavy theoretical math and physics research, and Opus 4.6 was simply the best tool I’d used for synthesizing ideas and working through complex logic. But the last few weeks have been really disappointing, and I’m seriously considering going back to GPT (even though, for personal reasons, I’d really rather not).

How many times has Claude been down recently? And why is it that I can ask Claude 4.7 (with adaptive thinking turned on) to work through a detailed proof, and it just spirals “oh wait, that doesn’t work, let me try again” five times in a single response? Yes, there’s a workaround to explicitly tell it to think before answering. But… why is that necessary? I’m paying $20/month. This is supposed to be a top-tier model. Instead, it burns through time, second-guesses itself mid-response, and often fails to land anywhere useful on problems I’m fairly sure 4.6 would have handled more coherently a month ago. And then before I know it I hit the usage limit.

I’m a PhD student. I can’t justify spending $100-$200/month on higher tiers. $20 has always been enough for me, and I’ve come to rely on these tools for my research. I expected to stick with Claude long-term, but the recent instability and drop in reliability make it hard to justify paying for it out of pocket.

It’s frustrating to feel pushed toward a competitor because of this. But at a certain point, the usability of the product has to come first. Really disappointing.

submitted by /u/JulioMcLaughlin2