Act or Escalate? Evaluating Escalation Behavior in Automation with Language Models

arXiv cs.AI / 4/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper frames “act vs escalate” in automation as a decision under uncertainty, where an LLM predicts correctness probability and chooses between acting or escalating based on expected costs.
  • Experiments across five domains (forecasting, recommendation, moderation, loan approval, autonomous driving) show that escalation thresholds differ significantly across models and are not explained by architecture or scale, while self-estimates are systematically miscalibrated.
  • The study tests interventions—adjusting cost ratios, providing accuracy signals, and training models to follow escalation rules—and finds prompting helps mainly for reasoning-oriented models.
  • Supervised fine-tuning on chain-of-thought targets for the desired escalation policy produces the most robust behavior and generalizes across datasets, cost ratios, prompt formats, and held-out domains.
  • Overall, the authors argue that escalation behavior is a model-specific characteristic that should be assessed before deployment, and that aligning models to reason about uncertainty and decision costs improves reliability.
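The decision framework described above can be sketched in a few lines. This is my illustrative reconstruction, not the authors' exact formulation: the cost names (`cost_error`, `cost_escalate`) and the zero-cost-when-correct assumption are mine, chosen to make the threshold arithmetic concrete.

```python
def should_escalate(p_correct: float, cost_error: float, cost_escalate: float) -> bool:
    """Escalate when the expected cost of acting exceeds the cost of escalating.

    Under the assumed model, acting costs nothing if the prediction is correct
    and cost_error otherwise, so its expected cost is (1 - p_correct) * cost_error.
    Escalating (e.g., deferring to a human) always costs cost_escalate.
    """
    expected_cost_act = (1.0 - p_correct) * cost_error
    return expected_cost_act > cost_escalate


def confidence_threshold(cost_error: float, cost_escalate: float) -> float:
    """Implied confidence threshold: act only when p_correct >= 1 - cost_escalate / cost_error."""
    return 1.0 - cost_escalate / cost_error


# With errors 5x as costly as escalation, the model should act only above 80% confidence.
print(confidence_threshold(cost_error=10.0, cost_escalate=2.0))   # 0.8
print(should_escalate(0.9, cost_error=10.0, cost_escalate=2.0))   # False: expected cost 1.0 <= 2.0
print(should_escalate(0.5, cost_error=10.0, cost_escalate=2.0))   # True: expected cost 5.0 > 2.0
```

The paper's finding that models apply different *implicit* thresholds amounts to saying their behavior matches this rule only for model-specific, often miscalibrated values of `p_correct` and the effective cost ratio.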

Abstract

Effective automation hinges on deciding when to act and when to escalate. We model this as a decision under uncertainty: an LLM forms a prediction, estimates its probability of being correct, and compares the expected costs of acting and escalating. Using this framework across five domains of recorded human decisions (demand forecasting, content recommendation, content moderation, loan approval, and autonomous driving) and across multiple model families, we find marked differences in the implicit thresholds models use to trade off these costs. These thresholds vary substantially and are not predicted by architecture or scale, while self-estimates are miscalibrated in model-specific ways. We then test interventions that target this decision process by varying cost ratios, providing accuracy signals, and training models to follow the desired escalation rule. Prompting helps mainly for reasoning models. SFT on chain-of-thought targets yields the most robust policies, which generalize across datasets, cost ratios, prompt framings, and held-out domains. These results suggest that escalation behavior is a model-specific property that should be characterized before deployment, and that robust alignment benefits from training models to reason explicitly about uncertainty and decision costs.