Coherence under Constraint

Reddit r/artificial / 4/16/2026

💬 OpinionSignals & Early TrendsIdeas & Deep AnalysisModels & Research

Key Points

  • The author reports small experiments that force LLMs into unresolved contradictions, finding that the key difference is not merely failure but how each model fails under constraint.
  • In the observed pattern, ChatGPT tends to detect contradictions but reacts late and may produce answers that do not fully resolve the inconsistency, while also sometimes avoiding explicit reframing.
  • Gemini is described as detecting contradictions and potentially continuing by producing an answer anyway, with a tendency to reframe rather than refuse, and with stronger “epistemic framing” than ChatGPT.
  • Claude is reported to detect adversarial setups and refocus framing earlier, maintaining high epistemic framing, but is less likely to produce answers after early refusal compared with others.
  • The post invites others to replicate results or relate them to existing research on model coherence, contradiction handling, and safety/refusal behaviors.

I’ve been running some small experiments forcing LLMs into contradictions they can’t resolve.
What surprised me wasn’t that they fail—it’s how differently they fail.

Rough pattern I’m seeing:

Behavior ChatGPT Gemini Claude
Detects contradiction
Refusal timing Late Never Early
Produces answer anyway
Reframes contradiction
Detects adversarial setup
Maintains epistemic framing Medium High Very High

Curious if others have seen similar behavior, or if this lines up with existing work.

submitted by /u/BorgAdjacent
[link] [comments]