Learning to Ask: When LLM Agents Meet Unclear Instruction

arXiv cs.CL / 4/30/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The paper studies how LLM agents that can call external tools perform when user instructions are imperfect or unclear, analyzing instructions queried from real users to identify common error patterns.
  • It introduces Noisy ToolBench (NoisyToolBench), a benchmark designed to stress-test tool use under noisy, ambiguous instructions.
  • The authors find that the next-token prediction training objective leads models to arbitrarily fill in missing arguments, which increases the risk of hallucinations.
  • To mitigate this, they propose Ask-when-Needed (AwN), a framework that prompts the LLM to ask the user clarifying questions whenever it encounters obstacles caused by unclear instructions (see the sketch after this list).
  • They also build ToolEvaluator, an automated tool that evaluates both accuracy and efficiency, and their experiments show AwN outperforming existing tool-learning approaches on NoisyToolBench, with code and datasets planned for release.
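
The sketch below illustrates the ask-when-needed idea in its simplest form: before executing a tool call, check whether every required argument was actually supplied, and return a clarifying question instead of guessing when something is missing. The `ToolSpec` and `ask_or_call` names and the flight-search example are illustrative assumptions, not the paper's implementation, which prompts the LLM itself to decide when to ask.

```python
from dataclasses import dataclass, field


@dataclass
class ToolSpec:
    """Schema for one callable tool: its name plus required argument names."""
    name: str
    required_args: list[str] = field(default_factory=list)


def ask_or_call(tool: ToolSpec, proposed_args: dict[str, str]) -> dict:
    """Decide whether to execute the tool or ask the user a clarifying question.

    `proposed_args` holds the arguments extracted from the (possibly unclear)
    instruction; empty or absent values are treated as unknown rather than
    letting the model invent them.
    """
    missing = [a for a in tool.required_args
               if not proposed_args.get(a, "").strip()]
    if missing:
        # Ask-when-Needed: surface a question instead of hallucinating values.
        question = (f"To call `{tool.name}` I still need: "
                    + ", ".join(missing) + ". Could you provide them?")
        return {"action": "ask_user", "question": question}
    return {"action": "call_tool", "tool": tool.name, "args": proposed_args}


# Example: a flight-search tool where the user never specified a date.
flight_tool = ToolSpec("search_flights", ["origin", "destination", "date"])
print(ask_or_call(flight_tool, {"origin": "SFO", "destination": "NRT"}))
# -> asks for "date" instead of guessing one
```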

Abstract

Equipped with the capability to call functions, modern large language models (LLMs) can leverage external tools to address a range of tasks unattainable through language skills alone. However, the effective execution of these tools relies heavily not only on the advanced capabilities of LLMs but also on precise user instructions, which often cannot be ensured in the real world. To evaluate LLMs' tool-use performance under imperfect instructions, we meticulously examine real-world instructions queried from users, analyze the error patterns, and build a challenging tool-use benchmark called Noisy ToolBench (NoisyToolBench). We find that, due to the next-token prediction training objective, LLMs tend to arbitrarily generate missing arguments, which may lead to hallucinations and risks. To address this issue, we propose a novel framework, Ask-when-Needed (AwN), which prompts LLMs to ask users questions whenever they encounter obstacles due to unclear instructions. Moreover, to reduce the manual labor involved in user-LLM interaction and to assess LLMs' tool-use performance from both accuracy and efficiency perspectives, we design an automated evaluation tool named ToolEvaluator. Our experiments demonstrate that AwN significantly outperforms existing tool-learning frameworks on NoisyToolBench. We will release all related code and datasets to support future research.
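
To give a concrete sense of how such an evaluation can be automated, here is a minimal sketch under strong simplifying assumptions: a scripted user answers clarifying questions from a ground-truth argument set, and each run is scored on whether the final tool call is correct (accuracy) and how many turns it took (efficiency). The `evaluate_agent` function and its keyword-matching user simulator are hypothetical; the abstract describes ToolEvaluator only as an automated evaluation tool, so the authors' actual design may differ.

```python
def evaluate_agent(agent_step, ground_truth: dict, expected_call: dict,
                   max_turns: int = 5) -> dict:
    """Score one dialogue on accuracy (correct final tool call) and
    efficiency (number of clarifying turns).

    `agent_step` is any callable that takes the conversation history and
    returns either an ask_user or call_tool action, e.g. the output of
    `ask_or_call` above wrapped around an LLM.
    """
    history: list[dict] = []
    for turn in range(max_turns):
        action = agent_step(history)
        if action["action"] == "ask_user":
            # Simulated user: answer with any ground-truth argument whose
            # name appears in the question, removing the human from the loop.
            answer = {k: v for k, v in ground_truth.items()
                      if k.lower() in action["question"].lower()}
            history.append({"question": action["question"], "answer": answer})
            continue
        correct = (action.get("tool") == expected_call["tool"]
                   and action.get("args") == expected_call["args"])
        return {"accuracy": float(correct), "turns_used": turn}
    # Agent never committed to a tool call within the turn budget.
    return {"accuracy": 0.0, "turns_used": max_turns}
```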