Framing Effects in Independent-Agent Large Language Models: A Cross-Family Behavioral Analysis

arXiv cs.AI / 3/23/2026


Key Points

  • The paper analyzes prompt framing effects on decision-making in independent-agent LLMs across multiple families using a threshold voting task.
  • Two logically equivalent prompts with different framings yielded divergent decision distributions across LLM families, indicating framing effects.
  • Surface linguistic cues can override the underlying logical formulation, revealing biases that persist despite formal equivalence.
  • The findings highlight framing as a major bias source in non-interacting multi-agent LLM deployments and have implications for alignment and prompt design.

Abstract

In many real-world applications, large language models (LLMs) operate as independent agents that cannot interact, which limits coordination. In this setting, we examine how prompt framing influences decisions in a threshold voting task involving a conflict between individual and group interests. Two logically equivalent prompts with different framings were tested across diverse LLM families in isolated trials. Results show that prompt framing significantly influences choice distributions, often shifting preferences toward risk-averse options. Surface linguistic cues can even override logically equivalent formulations. This suggests that the observed behavior reflects a preference for instrumental over cooperative rationality when success requires bearing risk. The findings highlight framing effects as a significant source of bias in non-interacting multi-agent LLM deployments, informing alignment and prompt design.
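The experimental setup described above can be sketched in miniature. The snippet below is an illustrative simulation, not the paper's code: the prompt wordings, the `mock_agent_choice` stub (a stand-in for an actual LLM call), and the framing-dependent probabilities are all assumptions chosen only to show how one would measure a shift in choice distributions between two logically equivalent framings under isolated trials.

```python
import random
from collections import Counter

# Two hypothetical, logically equivalent framings of the same threshold
# condition (gain-framed vs. loss-framed); the paper's exact prompts are
# not reproduced here.
FRAMINGS = {
    "gain": "If at least {k} of {n} agents choose A, every agent receives a reward.",
    "loss": "Unless at least {k} of {n} agents choose A, every agent forfeits a reward.",
}

def mock_agent_choice(framing: str, rng: random.Random) -> str:
    """Placeholder for an LLM call. Returns 'A' (risky, group-benefiting)
    or 'B' (safe, individually secure). The probabilities below are
    illustrative assumptions, not measured values from the paper."""
    p_risky = 0.60 if framing == "gain" else 0.35
    return "A" if rng.random() < p_risky else "B"

def run_trials(framing: str, n_agents: int = 10, trials: int = 500, seed: int = 0):
    """Run isolated trials: each agent decides independently, with no
    inter-agent communication, and we aggregate the choice distribution."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        for _ in range(n_agents):
            counts[mock_agent_choice(framing, rng)] += 1
    total = sum(counts.values())
    return {choice: counts[choice] / total for choice in ("A", "B")}

gain_dist = run_trials("gain")
loss_dist = run_trials("loss")
# A nonzero gap between the two distributions is the framing effect.
framing_shift = gain_dist["A"] - loss_dist["A"]
```

In a real replication, `mock_agent_choice` would be replaced by a call to each LLM under test, and the divergence between the two per-framing distributions (e.g., via a chi-squared test) would quantify the framing effect per model family.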