MIND: Multi-agent inference for negotiation dialogue in travel planning

arXiv cs.AI · March 24, 2026


Key Points

  • The paper introduces MIND (Multi-agent Inference for Negotiation Dialogue) to extend multi-agent debate research to travel-planning negotiations with heterogeneous traveler preferences.
  • MIND uses a Theory-of-Mind-inspired “Strategic Appraisal” phase to infer an opponent’s willingness (w) from linguistic cues, reporting 90.2% accuracy.
  • Experiments show MIND improves over traditional MAD approaches, including a 20.5% gain in High-w Hit and a 30.7% increase in Debate Hit-Rate.
  • LLM-as-a-Judge qualitative evaluation finds higher win rates in Rationality (68.8%) and Fluency (72.4%) versus baselines, with an overall win rate of 68.3%.
  • The authors conclude that MIND more faithfully models human negotiation dynamics, reaching persuasive consensus while prioritizing high-stakes constraints.

Abstract

While Multi-Agent Debate (MAD) research has advanced, its efficacy in coordinating complex stakeholder interests such as travel planning remains largely unexplored. To bridge this gap, we propose MIND (Multi-agent Inference for Negotiation Dialogue), a framework designed to simulate realistic consensus-building among travelers with heterogeneous preferences. Grounded in the Theory of Mind (ToM), MIND introduces a Strategic Appraisal phase that infers opponent willingness (w) from linguistic nuances with 90.2% accuracy. Experimental results demonstrate that MIND outperforms traditional MAD frameworks, achieving a 20.5% improvement in High-w Hit and a 30.7% increase in Debate Hit-Rate, effectively prioritizing high-stakes constraints. Furthermore, qualitative evaluations via LLM-as-a-Judge confirm that MIND surpasses baselines in Rationality (68.8%) and Fluency (72.4%), securing an overall win rate of 68.3%. These findings validate that MIND effectively models human negotiation dynamics to derive persuasive consensus.
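The core mechanism described above, inferring an opponent's willingness (w) from linguistic cues and then prioritizing high-w (firm) constraints, can be illustrated with a deliberately simplified sketch. Everything here is a hypothetical stand-in: the cue lexicons, the `appraise_willingness` helper, and the scoring rule are illustrative only, since the paper's Strategic Appraisal phase performs this inference with an LLM rather than keyword matching.

```python
# Hedged sketch of a Strategic Appraisal step: estimate a speaker's
# willingness w from surface cues in an utterance, then rank constraints
# so firm (high-w) items are negotiated first.
# NOTE: cue lists and scoring are illustrative, not the paper's method.

FIRM_CUES = ("must", "non-negotiable", "absolutely", "cannot", "deal-breaker")
FLEXIBLE_CUES = ("maybe", "could", "open to", "either way", "don't mind")

def appraise_willingness(utterance: str) -> float:
    """Return w in [0, 1]: higher means the speaker is firmer (less willing to yield)."""
    text = utterance.lower()
    firm = sum(cue in text for cue in FIRM_CUES)
    flexible = sum(cue in text for cue in FLEXIBLE_CUES)
    if firm == flexible == 0:
        return 0.5                      # no cues: assume neutral willingness
    return firm / (firm + flexible)     # fraction of firm cues among all cues

def prioritize(constraints: dict[str, str]) -> list[tuple[str, float]]:
    """Sort a traveler's stated constraints so high-w items come first."""
    scored = [(name, appraise_willingness(u)) for name, u in constraints.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

constraints = {
    "budget": "Staying under $2,000 is absolutely non-negotiable for me.",
    "museum": "I could skip the museum, I don't mind either way.",
    "dates": "We should probably travel in June.",
}
print(prioritize(constraints)[0][0])  # budget ranks first: its wording signals firmness
```

In MIND the analogous signal steers the debate itself, so that agents concede on low-w preferences while defending high-w constraints, which is what the High-w Hit and Debate Hit-Rate metrics measure.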