Taming Actor-Observer Asymmetry in Agents via Dialectical Alignment

arXiv cs.CL, April 22, 2026


Key Points

  • The paper shows that LLM agents in multi-agent frameworks built on role-play, self-reflection, and mutual auditing can develop Actor-Observer Asymmetry (AOA): as actors they attribute failures to external factors, while as observers they attribute the same failures to internal faults.
  • It introduces the Ambiguous Failure Benchmark and finds that merely swapping perspectives triggers AOA in over 20% of cases for most evaluated models.
  • To address this, the authors propose ReTAS (Reasoning via Thesis-Antithesis-Synthesis), trained with dialectical alignment to produce perspective-invariant reasoning.
  • ReTAS combines dialectical chain-of-thought with Group Relative Policy Optimization to help agents reconcile opposing viewpoints into a consensus.
  • Experiments indicate ReTAS reduces attribution inconsistency and improves fault-resolution performance in ambiguous situations.
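The perspective-swap effect described above can be illustrated with a minimal probe: present the same failure trace from the actor's and the observer's point of view, classify each explanation as blaming internal or external causes, and flag AOA when the labels flip. The keyword-based classifier and the sample explanations below are illustrative assumptions, not the paper's actual Ambiguous Failure Benchmark.

```python
def classify_attribution(explanation: str) -> str:
    """Toy classifier: label an explanation 'external' or 'internal'.

    A real benchmark would use a judge model; keyword matching is a stand-in.
    """
    external_cues = ("tool", "environment", "API", "timeout", "user input")
    return "external" if any(cue in explanation for cue in external_cues) else "internal"

def aoa_flip(actor_explanation: str, observer_explanation: str) -> bool:
    """Flag AOA: the same failure receives opposite attributions by perspective."""
    return classify_attribution(actor_explanation) != classify_attribution(observer_explanation)

# Hypothetical explanations of one identical failure:
actor = "The API timeout prevented the step from completing."      # blames the environment
observer = "The plan skipped a validation step, causing the bug."  # blames the agent
print(aoa_flip(actor, observer))  # True: external vs. internal attribution
```

Running such a probe over many ambiguous failure cases and counting flips is one way to arrive at a per-model inconsistency rate like the "over 20%" figure reported above.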

Abstract

Large Language Model agents have rapidly evolved from static text generators into dynamic systems capable of executing complex autonomous workflows. To enhance reliability, multi-agent frameworks assigning specialized roles are increasingly adopted to enable self-reflection and mutual auditing. While such role-playing effectively leverages domain expert knowledge, we find it simultaneously induces a human-like cognitive bias known as Actor-Observer Asymmetry (AOA). Specifically, an agent acting as an actor (during self-reflection) tends to attribute failures to external factors, whereas an observer (during mutual auditing) attributes the same errors to internal faults. We quantify this using our new Ambiguous Failure Benchmark, which reveals that simply swapping perspectives triggers the AOA effect in over 20% of cases for most models. To tame this bias, we introduce ReTAS (Reasoning via Thesis-Antithesis-Synthesis), a model trained through dialectical alignment to enforce perspective-invariant reasoning. By integrating dialectical chain-of-thought with Group Relative Policy Optimization, ReTAS guides agents to synthesize conflicting viewpoints into an objective consensus. Experiments demonstrate that ReTAS effectively mitigates attribution inconsistency and significantly improves fault resolution rates in ambiguous scenarios.
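The abstract pairs dialectical chain-of-thought with Group Relative Policy Optimization (GRPO). In GRPO, several responses are sampled per prompt and each response's advantage is its reward normalized against the group's mean and standard deviation, avoiding a learned value function. The sketch below shows only that advantage computation; the reward values are illustrative (in ReTAS's setting, the reward would presumably score how perspective-invariant the synthesized judgment is).

```python
from statistics import mean, pstdev

def grpo_advantages(group_rewards: list[float]) -> list[float]:
    """Group-relative advantages: normalize each sampled response's reward
    by the mean and standard deviation of its sampling group."""
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma + 1e-8) for r in group_rewards]

# Four hypothetical thesis-antithesis-synthesis traces scored for one failure case:
advantages = grpo_advantages([1.0, 0.0, 1.0, 0.0])
print(advantages)  # roughly [1.0, -1.0, 1.0, -1.0]
```

Traces with above-average rewards receive positive advantages and are reinforced; the rest are suppressed, pushing the policy toward consistently objective syntheses.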