AI Navigate

Learning to Negotiate: Multi-Agent Deliberation for Collective Value Alignment in LLMs

arXiv cs.CL / 3/12/2026

📰 News · Models & Research

Key Points

  • A new multi-agent negotiation-based framework is proposed to align LLMs to Collective Agency (CA), improving their ability to handle value conflicts in multi-stakeholder environments.
  • The approach uses two self-play instances of the same LLM with opposing personas that engage in structured turn-based dialogue to synthesize mutually beneficial solutions.
  • Training uses reinforcement learning from AI feedback (RLAIF) via GRPO with an external LLM reward model, applying gradients to dialogue tokens based on final CA scores.
  • Empirical results show the model achieves CA alignment comparable to a single-agent baseline while substantially improving conflict-resolution performance without degrading general language capabilities.
  • The work suggests negotiation-driven deliberation training as a practical path toward LLMs that better support collective decision-making in value-conflict scenarios.
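The turn-taking structure described in the key points can be sketched as follows. This is a hypothetical illustration, not the authors' code: the `generate` stub stands in for a call to the policy LLM, and all names (`negotiate`, `persona_a`, `persona_b`, `max_turns`) are illustrative assumptions.

```python
def generate(prompt: str) -> str:
    """Stand-in for a single LLM call; a real system would query the policy model."""
    return f"<reply to: {prompt[-40:]}>"

def negotiate(dilemma: str, persona_a: str, persona_b: str, max_turns: int = 6) -> list[str]:
    """Two opposing personas, played by the same model, alternate turns on a dilemma,
    then a final completion synthesizes a mutually beneficial solution."""
    transcript = [f"Dilemma: {dilemma}"]
    personas = [persona_a, persona_b]
    for turn in range(max_turns):
        speaker = personas[turn % 2]  # strict turn-taking between the two personas
        context = "\n".join(transcript)
        reply = generate(f"You are {speaker}.\n{context}\nRespond:")
        transcript.append(f"{speaker}: {reply}")
    # Final completion: the joint proposal later scored by the reward model
    transcript.append(generate("Synthesize a mutually beneficial solution:\n" + "\n".join(transcript)))
    return transcript
```

In the paper's setup, only this final synthesized completion is scored for CA, while the whole transcript is optimized; the sketch above just makes the dialogue structure concrete.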

Abstract

The alignment of large language models (LLMs) has progressed substantially in single-agent settings through paradigms such as RLHF and Constitutional AI, with recent work exploring scalable alternatives such as RLAIF and evolving alignment objectives. However, these approaches remain limited in multi-stakeholder settings, where conflicting values arise and deliberative negotiation capabilities are required. This work proposes a multi-agent negotiation-based alignment framework that aligns LLMs to Collective Agency (CA), an existing alignment objective introduced to promote the continual expansion of agency, while simultaneously improving conflict-resolution capability. To enable scalable training, two self-play instances of the same LLM, assigned opposing personas, engage in structured turn-based dialogue to synthesize mutually beneficial solutions. We generate synthetic moral-dilemma prompts and conflicting persona pairs, and optimize the policy via RLAIF using GRPO with an external LLM reward model. While rewards are computed from CA scores assigned to the final completion, gradients are applied to dialogue tokens to directly improve deliberative interaction dynamics. Experiments show that the resulting model achieves CA alignment comparable to a single-agent baseline while substantially improving conflict-resolution performance without degrading general language capabilities. These results suggest that negotiation-driven deliberation training provides a practical path toward LLMs that better support collective decision-making in value-conflict scenarios.
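The reward scheme described in the abstract, where a group of sampled dialogues is scored only on the final completion and the group-relative advantage is then broadcast to all dialogue tokens, resembles standard GRPO advantage normalization. A minimal sketch, assuming a hypothetical `ca_score` stand-in for the external LLM reward model (the placeholder scoring rule is not the paper's):

```python
import statistics

def ca_score(final_completion: str) -> float:
    """Stand-in for the external reward model's Collective Agency score.
    A real system would prompt an LLM judge; this placeholder just returns
    a deterministic dummy signal so the sketch is runnable."""
    return float(len(final_completion) % 10)

def grpo_advantages(group_final_completions: list[str]) -> list[float]:
    """GRPO-style group-relative advantages: score each sampled dialogue's
    final completion, then normalize rewards within the group. Each
    dialogue's tokens would all share its single normalized advantage."""
    rewards = [ca_score(c) for c in group_final_completions]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]
```

Because the per-dialogue advantage is applied uniformly to the negotiation turns rather than only to the scored completion, the policy gradient shapes the deliberative exchange itself, which is the mechanism the paper credits for improved conflict resolution.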