HiL-Bench (Human-in-Loop Benchmark): Do Agents Know When to Ask for Help?

arXiv cs.AI / 4/13/2026


Key Points

  • The study argues that the main reason coding agents degrade on incomplete or ambiguous specifications is not a lack of capability but a lack of judgment: knowing when to act autonomously and when to ask for help.
  • To measure this failure mode, it proposes HiL-Bench (Human-in-the-Loop Benchmark), a new benchmark whose tasks contain human-validated blockers that surface only through progressive exploration, and evaluates selective escalation with Ask-F1, the harmonic mean of question precision and blocker recall.
  • Current frontier models recover only a fraction of their full-context performance when deciding whether to ask for help, revealing a universal "judgment gap."
  • Analysis identifies three typical failure patterns: overconfidently holding wrong beliefs without detecting gaps; detecting uncertainty yet repeating the same errors; and escalating broadly and imprecisely without self-correcting.
  • Reinforcement learning with a shaped Ask-F1 reward improves both help-seeking quality and task pass rate for a 32B model, and the gains transfer between SWE and text-to-SQL.

Abstract

Frontier coding agents solve complex tasks when given complete context but collapse when specifications are incomplete or ambiguous. The bottleneck is not raw capability, but judgment: knowing when to act autonomously and when to ask for help. Current benchmarks are blind to this failure mode. They supply unambiguous detailed instructions and solely reward execution correctness, so an agent that makes a lucky guess for a missing requirement will score identically to one that would have asked to be certain. We present HiL-Bench (Human-in-the-Loop Benchmark) to measure this selective escalation skill. Each task contains human-validated blockers (missing information, ambiguous requests, contradictory information) that surface only through progressive exploration, not upfront inspection. Our core metric, Ask-F1, the harmonic mean of question precision and blocker recall, captures the tension between over-asking and silent guessing; its structure architecturally prevents gaming through question spam. Evaluation across SWE and text-to-SQL domains reveals a large universal judgment gap: no frontier model recovers more than a fraction of its full-information performance when deciding whether to ask. Failure analysis identifies three key help-seeking patterns: overconfident wrong beliefs with no gap detection; high uncertainty detection yet persistent errors; broad, imprecise escalation without self-correction. These consistent patterns confirm poor help-seeking is a model-level flaw, not task-specific. RL training on shaped Ask-F1 reward shows judgment is trainable: a 32B model improves both help-seeking quality and task pass rate, with gains that transfer across domains. The model does not learn domain-specific heuristics for when to ask; it learns to detect unresolvable uncertainty and act on it.
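The Ask-F1 definition above can be sketched in a few lines. The function name and the way questions are counted against blockers are illustrative assumptions, not the authors' implementation; the benchmark's actual matching relies on human validation.

```python
def ask_f1(useful_questions: int, total_questions: int,
           covered_blockers: int, total_blockers: int) -> float:
    """Harmonic mean of question precision and blocker recall.

    Precision: fraction of asked questions that address a real blocker.
    Recall: fraction of the task's blockers covered by at least one question.
    """
    precision = useful_questions / total_questions if total_questions else 0.0
    recall = covered_blockers / total_blockers if total_blockers else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A focused agent: 2 questions, both on-target, covering 2 of 4 blockers.
focused = ask_f1(useful_questions=2, total_questions=2,
                 covered_blockers=2, total_blockers=4)   # ≈ 0.667

# A spamming agent: 10 questions, only 2 on-target, same blocker coverage.
spam = ask_f1(useful_questions=2, total_questions=10,
              covered_blockers=2, total_blockers=4)      # ≈ 0.286

print(f"focused={focused:.3f}  spam={spam:.3f}")
```

This illustrates why the metric resists gaming through question spam: padding with off-target questions can only raise recall while driving precision toward zero, and the harmonic mean is dominated by its smaller operand.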