Near-Miss: Latent Policy Failure Detection in Agentic Workflows

arXiv cs.CL / 4/1/2026


Key Points

  • The paper argues that evaluating LLM-based agentic workflows by checking only the final state against ground truth can miss a subtle failure mode where required policy checks are bypassed but the outcome still appears correct.
  • It introduces a metric to detect “near-misses” (latent policy failures) by analyzing agent conversation traces, focusing on whether tool-calling decisions were sufficiently informed.
  • The method builds on ToolGuard, which translates natural-language policies into executable guard code, to assess the quality of the agent’s intermediate decision process.
  • Experiments on the τ²-verified Airlines benchmark across multiple open and proprietary LLM agents found latent failures in roughly 8–17% of trajectories involving mutating tool calls, despite correct final states.
  • The authors conclude current evaluation methodologies have a blind spot and call for metrics that assess both compliance and the trajectory leading to an outcome, not just the outcome itself.

Abstract

Agentic systems for business process automation often require compliance with policies governing conditional updates to the system state. Evaluation of policy adherence in LLM-based agentic workflows is typically performed by comparing the final system state against a predefined ground truth. While this approach detects explicit policy violations, it may overlook a more subtle class of issues in which agents bypass required policy checks yet reach a correct outcome due to favorable circumstances. We refer to such cases as "near-misses" or "latent failures". In this work, we introduce a novel metric for detecting latent policy failures in agent conversation traces. Building on the ToolGuard framework, which converts natural-language policies into executable guard code, our method analyzes agent trajectories to determine whether the agent's tool-calling decisions were sufficiently informed. We evaluate our approach on the τ²-verified Airlines benchmark across several contemporary open and proprietary LLMs acting as agents. Our results show that latent failures occur in 8–17% of trajectories involving mutating tool calls, even when the final outcome matches the expected ground-truth state. These findings reveal a blind spot in current evaluation methodologies and highlight the need for metrics that assess not only final outcomes but also the decision process leading to them.
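To make the "near-miss" idea concrete, here is a minimal sketch of how a trace-level check might work. It is not the paper's actual ToolGuard-based implementation; the tool names, the `REQUIRED_READS` guard table, and the trace representation are all illustrative assumptions. The core idea it captures is the same: a mutating tool call counts as a latent failure when the final state matches the ground truth but the agent never made the read-only calls needed to inform that decision.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    mutating: bool = False

# Hypothetical guard table (NOT from the paper): each mutating tool maps
# to the read-only tools the agent must have called beforehand for the
# decision to count as "sufficiently informed".
REQUIRED_READS = {
    "update_reservation": {"get_reservation", "get_fare_rules"},
}

def latent_failures(trace, final_state_correct):
    """Return mutating calls that bypassed their required checks even
    though the final state matched the ground truth (a 'near-miss')."""
    seen = set()   # read-only tools called so far in the trajectory
    misses = []
    for call in trace:
        if call.mutating:
            required = REQUIRED_READS.get(call.name, set())
            # Outcome looks correct, but the decision was uninformed:
            if final_state_correct and not required <= seen:
                misses.append(call.name)
        else:
            seen.add(call.name)
    return misses
```

A final-state check alone would score both of the following trajectories as passing; the trace-level check separates them: an agent that calls `update_reservation` cold is flagged, while one that first calls `get_reservation` and `get_fare_rules` is not.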