Just saw a post from Peter Steinberger (creator of OpenClaw) saying it will likely get harder to keep OpenClaw working smoothly with Anthropic/Claude models.
That alone is pretty telling.
At the same time, I’ve also been seeing reports of accounts getting flagged or access revoked due to “suspicious usage signals” — which honestly makes sense if you’re running agents, automation, or heavier workflows.
I personally run OpenClaw with a hybrid setup:
- GPT 5.4 / Codex-style models for execution
- Claude (Opus 4.6) as my architect lol
- Local models (still testing for stability) for overnight work
I haven’t had any bans or issues yet.
So if Peter himself is saying this, it feels like a real signal, not just speculation.
My take:
I think part of this is that Anthropic is building out their own AI agent ecosystem internally.
If that’s the case, it would make sense why:
- External agent frameworks get more restricted
- Usage gets flagged more aggressively
- Integrations like OpenClaw become harder to maintain
Not saying that’s 100% what’s happening — but it lines up.
Which is why I’m leaning more toward:
local models + controlled API routing instead of relying too heavily on one provider.
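To make "controlled API routing" concrete, here's a minimal sketch of the idea: route each task type to a backend, with a local model as the fallback when a hosted provider is unavailable. All the names (`TASK_ROUTES`, the model labels, `route`) are hypothetical and just illustrate the pattern, not any real OpenClaw config or provider API.

```python
# Hypothetical routing table: task type -> backend label.
# Model names are placeholders, not real API identifiers.
TASK_ROUTES = {
    "execution": "gpt-codex",       # fast code execution / edits
    "architecture": "claude-opus",  # high-level planning
    "overnight": "local-llm",       # long unattended runs
}

def route(task_type: str, provider_up: bool = True) -> str:
    """Pick a backend for a task; fall back to the local model
    when the hosted provider is down or the task is unknown."""
    backend = TASK_ROUTES.get(task_type, "local-llm")
    if not provider_up and backend != "local-llm":
        return "local-llm"
    return backend
```

The point of keeping this logic in your own code (rather than inside one provider's tooling) is that a flagged account or a tightened integration only costs you one entry in the table.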
Curious what others are seeing.
Are you still using Claude inside OpenClaw consistently, or already shifting your setup?