Ask or Assume? Uncertainty-Aware Clarification-Seeking in Coding Agents

arXiv cs.CL · March 30, 2026


Key Points

  • The paper studies how LLM-based coding agents should handle underspecified instructions by comparing clarification-seeking (“ask”) versus autonomous guessing (“assume”).
  • It introduces an uncertainty-aware multi-agent scaffold that separates underspecification detection from code execution, rather than having a single agent handle both.
  • Evaluated on an underspecified variant of SWE-bench Verified, the OpenHands + Claude Sonnet 4.5 setup reaches a 69.40% task resolve rate versus 61.20% for a standard single-agent approach.
  • The multi-agent method shows well-calibrated uncertainty, asking fewer questions on easy tasks while proactively querying when issues are more complex.
  • The authors argue the approach turns agents into proactive collaborators that independently recognize when missing context should be clarified with the user rather than guessed at.
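
The decoupling described in the key points can be sketched as a simple routing pattern: a detector stage decides whether the issue is underspecified, and only then is the user queried before the coding stage runs. This is a minimal illustrative sketch, not the paper's implementation; all names (`DetectorVerdict`, `detect_underspecification`, `run_agent`) are hypothetical, and the heuristic detector stands in for what would be an LLM-based agent.

```python
# Hypothetical sketch of the "ask vs. assume" scaffold: detection is
# decoupled from code execution. Names and heuristics are illustrative only.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class DetectorVerdict:
    underspecified: bool
    question: Optional[str] = None


def detect_underspecification(issue: str) -> DetectorVerdict:
    # Stand-in for an LLM detector agent: flag issues that give neither
    # expected behavior nor a way to reproduce the bug.
    has_context = any(k in issue.lower() for k in ("expected", "reproduce", "traceback"))
    if not has_context:
        return DetectorVerdict(
            True, "What is the expected behavior, and how can the bug be reproduced?"
        )
    return DetectorVerdict(False)


def run_agent(issue: str, ask_user: Optional[Callable[[str], str]] = None) -> str:
    """Ask a clarifying question only when the detector flags the issue."""
    verdict = detect_underspecification(issue)
    if verdict.underspecified and ask_user is not None:
        # Fold the user's answer back into the issue before coding begins.
        issue = issue + "\n" + ask_user(verdict.question)
    return f"PATCH for: {issue.splitlines()[0]}"  # placeholder for the coding agent
```

The design point is that the detector's verdict gates the question: well-specified issues skip straight to the coding stage, which is how a scaffold like this could conserve queries on easy tasks while still asking on ambiguous ones.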

Abstract

As Large Language Model (LLM) agents are increasingly deployed in open-ended domains like software engineering, they frequently encounter underspecified instructions that lack crucial context. While human developers naturally resolve underspecification by asking clarifying questions, current agents are largely optimized for autonomous execution. In this work, we systematically evaluate the clarification-seeking abilities of LLM agents on an underspecified variant of SWE-bench Verified. We propose an uncertainty-aware multi-agent scaffold that explicitly decouples underspecification detection from code execution. Our results demonstrate that this multi-agent system using OpenHands + Claude Sonnet 4.5 achieves a 69.40% task resolve rate, significantly outperforming a standard single-agent setup (61.20%) and closing the performance gap with agents operating on fully specified instructions. Furthermore, we find that the multi-agent system exhibits well-calibrated uncertainty, conserving queries on simple tasks while proactively seeking information on more complex issues. These findings indicate that current models can be turned into proactive collaborators, where agents independently recognize when to ask questions to elicit missing information in real-world, underspecified tasks.