Useless but Safe? Benchmarking Utility Recovery with User Intent Clarification in Multi-Turn Conversations

arXiv cs.CL / 5/1/2026

Key Points

  • The paper introduces CarryOnBench, an interactive multi-turn benchmark that tests whether LLMs can recover utility after initially misinterpreting benign user intent, while still remaining safe.
  • Starting from 398 seemingly harmful queries with benign underlying intents, the study simulates 5,970 conversations and evaluates 14 models on both intent-aligned utility and safety, across 1,866 conversation flows of 4–12 turns totaling 23,880 model responses (the scale arithmetic is sketched after this list).
  • The proposed Ben-Util metric shows that at the first turn, models satisfy only 10.5%–37.6% of the user’s benign information need, but reach 25.1%–72.1% when the same query states the benign intent upfront, indicating that the shortfall stems from intent misinterpretation rather than limited knowledge.
  • In multi-turn settings with clarifications, 13 of 14 models approach or exceed the single-turn baseline, but recovery cost varies across models, and the benchmark exposes three failure modes invisible to single-turn tests: utility lock-in, unsafe recovery, and repetitive recovery.
  • The authors find that multi-turn conversations converge to similar harmfulness levels regardless of how conservative the model begins, highlighting a missing dimension in single-turn safety/robustness evaluations: responsiveness to clarified intent.
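
As a quick consistency check, the headline counts in the bullets above fit together exactly. The derived quantities below (15 follow-up variants per query, 4 graded responses per conversation) are inferred here by division and are averages; they are not stated explicitly in the paper's abstract.

```python
# Scale arithmetic for CarryOnBench, using only the counts reported above.
queries = 398          # seemingly harmful queries with benign underlying intents
conversations = 5_970  # simulated conversations (varying user follow-up sequences)
responses = 23_880     # total graded model responses

# Derived quantities (inferred by division, not stated in the summary):
assert conversations / queries == 15   # average follow-up variants per query
assert responses / conversations == 4  # average graded responses per conversation

print("follow-up variants per query (avg):", conversations // queries)
print("responses per conversation (avg):", responses // conversations)
```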

Abstract

Current LLM safety alignment techniques improve model robustness against adversarial attacks, but overlook whether and how LLMs can recover helpfulness when benign users clarify their intent. We introduce CarryOnBench, the first interactive benchmark that measures whether LLMs can revise their interpretation of user intent and recover utility, while remaining safe through multi-turn conversations. Starting from 398 seemingly harmful queries with benign underlying intents, we simulate 5,970 conversations by varying user follow-up sequences, evaluating 14 models on both intent-aligned utility and safety. CarryOnBench yields 1,866 different conversation flows of 4–12 turns, totaling 23,880 model responses. We design Ben-Util, a checklist-based metric that evaluates how well each model response fulfills the user's benign information need using atomic items. At turn one, models fulfill only 10.5–37.6% of the user's benign information need. When the same query includes the benign intent upfront, models fulfill 25.1–72.1%, confirming that models withhold information due to intent misinterpretation, not limited knowledge. With benign clarifications in multi-turn conversations, 13 of 14 models approach or exceed this single-turn baseline, yet recovery cost varies across models. We identify three failure modes invisible to single-turn evaluations: utility lock-in, where a model rarely updates despite clarification; unsafe recovery, where a model updates at disproportionate safety cost; and repetitive recovery, where a model recycles prior responses rather than providing new information. Moreover, conversations converge to similar harmfulness levels regardless of how conservative the model starts. These findings expose a gap that single-turn evaluations miss: whether a model is appropriately cautious or simply unresponsive to clarified user intent.
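
The checklist-based scoring described in the abstract lends itself to a compact sketch. The following is a minimal illustration, not the authors' released implementation: all names (`ChecklistItem`, `ben_util`, `utility_trajectory`, `item_satisfied`) are hypothetical, and the judge that decides whether a response covers an atomic item is left as a pluggable callable (in the paper's setting this would presumably be an LLM grader).

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ChecklistItem:
    """One atomic unit of the user's benign information need (hypothetical name)."""
    description: str


def ben_util(
    response: str,
    checklist: list[ChecklistItem],
    item_satisfied: Callable[[str, ChecklistItem], bool],
) -> float:
    """Fraction of atomic checklist items a single response fulfills, in [0, 1].

    `item_satisfied` stands in for the judge that decides whether the
    response covers one atomic item.
    """
    if not checklist:
        return 0.0
    hits = sum(item_satisfied(response, item) for item in checklist)
    return hits / len(checklist)


def utility_trajectory(
    responses: list[str],
    checklist: list[ChecklistItem],
    item_satisfied: Callable[[str, ChecklistItem], bool],
) -> list[float]:
    """Per-turn Ben-Util scores for one conversation, so that recovery after a
    user's clarification shows up as a rise over turns."""
    return [ben_util(r, checklist, item_satisfied) for r in responses]
```

Under this framing, the failure modes from the abstract have simple signatures: utility lock-in appears as a flat trajectory despite clarifications, and repetitive recovery could be flagged by additionally comparing each response against earlier ones for near-duplication.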