Debate to Align: Reliable Entity Alignment through Two-Stage Multi-Agent Debate

arXiv cs.CL / April 16, 2026

Key Points

  • The paper introduces AgentEA, a reliable entity alignment (EA) framework for matching the same real-world entities across different knowledge graphs, especially when candidate evidence and LLM reasoning quality are uncertain.
  • AgentEA first improves entity embedding quality using entity representation preference optimization before performing alignment.
  • It then applies a two-stage multi-agent debate strategy: a lightweight debate verification stage followed by a deeper debate alignment stage to progressively increase the reliability of alignment decisions.
  • Experiments across multiple challenging benchmark settings—cross-lingual, sparse, large-scale, and heterogeneous—show that AgentEA improves alignment effectiveness compared with prior LLM/embedding-similarity-based approaches.
  • The work targets limitations of existing pipelines where unreliable candidate entity sets (CES) and varying LLM reasoning capability can undermine downstream EA decisions.

Abstract

Entity alignment (EA) aims to identify entities referring to the same real-world object across different knowledge graphs (KGs). Recent approaches based on large language models (LLMs) typically obtain entity embeddings through knowledge representation learning and use embedding similarity to identify an alignment-uncertain entity set. For each uncertain entity, a candidate entity set (CES) is then retrieved based on embedding similarity to support subsequent alignment reasoning and decision making. However, the reliability of the CES and the reasoning capability of LLMs critically affect the effectiveness of subsequent alignment decisions. To address this issue, we propose AgentEA, a reliable EA framework based on multi-agent debate. AgentEA first improves embedding quality through entity representation preference optimization, and then introduces a two-stage multi-role debate mechanism consisting of lightweight debate verification and deep debate alignment to progressively enhance the reliability of alignment decisions while enabling more efficient debate-based reasoning. Extensive experiments on public benchmarks under cross-lingual, sparse, large-scale, and heterogeneous settings demonstrate the effectiveness of AgentEA.
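The candidate-retrieval step the abstract describes (compute embedding similarity, flag alignment-uncertain entities, then gather a candidate entity set for each) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the margin-based uncertainty test, the parameter names, and the top-k cutoff are assumptions for the sake of the example.

```python
import numpy as np

def retrieve_candidates(src_emb, tgt_emb, k=5, margin=0.05):
    """Sketch of embedding-similarity CES retrieval (assumed details).

    src_emb: (n_src, d) L2-normalized source-KG entity embeddings
    tgt_emb: (n_tgt, d) L2-normalized target-KG entity embeddings
    Returns a dict mapping each alignment-uncertain source entity
    index to its top-k candidate target indices (its CES).
    """
    sim = src_emb @ tgt_emb.T                 # cosine similarity for normalized inputs
    order = np.argsort(-sim, axis=1)          # targets ranked best-first per source entity
    rows = np.arange(sim.shape[0])
    top1 = sim[rows, order[:, 0]]
    top2 = sim[rows, order[:, 1]]
    # A small gap between the best and second-best match is one simple
    # way to mark an entity as alignment-uncertain (an assumed criterion).
    uncertain = np.where(top1 - top2 < margin)[0]
    return {int(i): order[i, :k].tolist() for i in uncertain}
```

Entities with a clear best match are aligned directly from the similarity scores; only the uncertain ones, with their candidate sets, would be passed to the downstream debate stages.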