Breaking the Chain: A Causal Analysis of LLM Faithfulness to Intermediate Structures
arXiv cs.AI / 3/18/2026
Key Points
- The authors present a causal evaluation protocol to test whether the intermediate structures produced in schema-guided LLM reasoning causally determine final outputs.
- In experiments across eight models and three benchmarks, models are self-consistent with their intermediate structures yet often fail to update predictions after interventions on those structures, in up to 60% of cases, revealing the fragility of apparent faithfulness (a minimal sketch of this check follows the list).
- When the final decision is instead derived from the structure by an external tool, this fragility largely disappears, suggesting that in the model-generated setting the structure influences, but does not reliably mediate, the outcome.
- Prompts that emphasize the intermediate structure over the original input do not materially close the gap, indicating intermediate structures act as influential context rather than stable causal mediators.