When Roles Fail: Epistemic Constraints on Advocate Role Fidelity in LLM-Based Political Statement Analysis
arXiv cs.AI / 5/1/2026
Key Points
- The paper tests a key assumption behind multi-agent LLM political-discourse systems, namely that evaluator models reliably maintain their assigned adversarial advocate roles, using the TRUST pipeline.
- It introduces an epistemic stance classifier to infer advocate roles from reasoning text and evaluates role fidelity on 60 political statements (30 English, 30 German) with metrics including Role Drift Index (RDI) and Entropy-based Role Stability (ERS).
- Two role-failure modes are identified, an Epistemic Floor Effect and a Role-Prior Conflict, and both are shown to stem from a single underlying mechanism the authors call Epistemic Role Override (ERO).
- Model and component choices materially change role fidelity: Mistral Large outperforms Claude Sonnet by 28 percentage points (67% vs. 39%), and fact-check provider selection can reduce Claude’s fidelity for German inputs.
- The authors argue that multi-agent LLM validation that omits explicit role-fidelity measurement can systematically misrepresent the epistemic diversity the system is intended to produce.
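The summary names an Entropy-based Role Stability (ERS) metric but does not define it. As a purely illustrative sketch (the function name, normalization, and role labels are assumptions, not the paper's definition), one plausible reading is a normalized Shannon entropy over the roles inferred from a model's reasoning turns: a score of 1.0 means the model held one role throughout, and a score near 0.0 means its inferred roles were maximally mixed.

```python
import math
from collections import Counter

def entropy_role_stability(inferred_roles):
    """Illustrative entropy-based role-stability score (NOT the paper's ERS).

    Takes a sequence of role labels inferred per reasoning turn
    (e.g. by an epistemic stance classifier) and returns a value in [0, 1]:
    1.0 = a single role held throughout (zero entropy),
    0.0 = roles uniformly mixed (maximum entropy).
    """
    counts = Counter(inferred_roles)
    n = len(inferred_roles)
    if len(counts) <= 1:
        return 1.0  # only one role ever inferred: perfectly stable
    # Shannon entropy of the empirical role distribution, in bits
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    h_max = math.log2(len(counts))  # entropy of a uniform mix over observed roles
    return 1.0 - h / h_max

# Hypothetical usage: per-turn roles inferred from one evaluator's reasoning
print(entropy_role_stability(["pro"] * 8))                  # stable advocate
print(entropy_role_stability(["pro", "con", "pro", "con"])) # fully drifting
```

A complementary Role Drift Index could then count transitions away from the assigned role, but both constructions here are guesses at the general shape of such metrics, not reconstructions of the paper's formulas.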