Generating Place-Based Compromises Between Two Points of View

arXiv cs.CL / 4/28/2026


Key Points

  • The paper addresses how LLMs can produce socially acceptable compromises, aiming for an empathically neutral synthesis of two opposing viewpoints.
  • It benchmarks four prompt-engineering approaches on Claude 3 Opus using a dataset of 2,400 contrasting views tied to shared places.
  • A subset of the generated compromises was assessed for acceptability in a 50-participant study, which showed that using external empathic similarity as iterative feedback outperforms standard Chain-of-Thought (CoT) prompting.
  • The authors then use the generated compromise data to train two smaller foundation models via margin-based alignment with human preferences, improving efficiency and avoiding the need for explicit empathy estimation at inference time.
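The feedback loop described above can be illustrated with a toy sketch. The paper's empathic-similarity model and LLM revision step are not specified here, so this stand-in uses bag-of-words cosine similarity as the scorer and a fixed candidate pool in place of LLM-generated revisions; the names (`neutrality_score`, the example viewpoints) are illustrative, not from the paper.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words vector: a crude stand-in for an empathic-similarity model."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def neutrality_score(compromise, view_a, view_b):
    """Reward compromises similar to BOTH views that favor neither side."""
    sa = cosine(bow(compromise), bow(view_a))
    sb = cosine(bow(compromise), bow(view_b))
    return min(sa, sb) - abs(sa - sb)

view_a = "the park should host live music events every weekend"
view_b = "the park should stay quiet for rest and nature walks"

# In the paper, the two similarity scores are fed back to the LLM to revise
# its draft; here a fixed candidate pool stands in for those revisions.
candidates = [
    "the park should host live music events",
    "the park should stay quiet at all times",
    "the park should host live acoustic music events on some weekends and stay quiet on others",
]

best = max(candidates, key=lambda c: neutrality_score(c, view_a, view_b))
print(best)  # the balanced option scores highest
```

The scoring rule captures the two properties the paper targets: the compromise must engage with both viewpoints (the `min` term) while favoring neither (the `abs` penalty), so the balanced third candidate wins.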

Abstract

Large Language Models (LLMs) excel academically but struggle with social intelligence tasks, such as creating good compromises. In this paper, we present methods for generating empathically neutral compromises between two opposing viewpoints. We first compared four different prompt engineering methods using Claude 3 Opus and a dataset of 2,400 contrasting views on shared places. A subset of the generated compromises was evaluated for acceptability in a 50-participant study. We found that the best method for generating compromises between two views used external empathic similarity between a compromise and each viewpoint as iterative feedback, outperforming standard Chain of Thought (CoT) reasoning. The results indicate that the use of empathic neutrality improves the acceptability of compromises. The dataset of generated compromises was then used to train two smaller foundation models via margin-based alignment of human preferences, improving efficiency and removing the need for empathy estimation during inference.
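The margin-based alignment objective mentioned in the abstract typically takes the form of a hinge-style ranking loss over (preferred, rejected) pairs. The paper's exact formulation is not given here, so the following is a minimal sketch under that assumption; the function name and example scores are illustrative.

```python
def margin_loss(preferred_score, rejected_score, margin=1.0):
    """Hinge-style margin ranking loss: zero once the preferred compromise
    outscores the rejected one by at least `margin`."""
    return max(0.0, margin - (preferred_score - rejected_score))

# Scores a reward model might assign to (human-preferred, rejected) pairs.
pairs = [(2.0, 0.5), (0.9, 0.8), (0.1, 1.2)]
losses = [margin_loss(p, r) for p, r in pairs]
batch_loss = sum(losses) / len(pairs)
```

Training a smaller model against such pairwise margins bakes the preference signal into the model's weights, which is why, as the abstract notes, no separate empathy estimator is needed at inference time.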