AI Navigate

$PA^3$: $\textbf{P}$olicy-$\textbf{A}$ware $\textbf{A}$gent $\textbf{A}$lignment through Chain-of-Thought

arXiv cs.CL / 3/17/2026


Key Points

  • The paper proposes a multi-stage alignment method that teaches LLMs to recall and apply relevant business policies during chain-of-thought reasoning at inference time, without including the full policy in-context.
  • It introduces a PolicyRecall reward based on the Jaccard score and a Hallucination Penalty for GRPO training to improve policy-grounded reasoning.
  • The approach aims to reduce latency and context-length issues by avoiding lengthy prompts while still adhering to business rules.
  • Empirical results show the best model outperforms the baseline by 16 points and comparable in-context baselines of similar model size by 3 points, while using 40% fewer words.
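To make the PolicyRecall reward concrete, here is a minimal sketch of a Jaccard-based reward with a hallucination penalty. The paper does not publish its exact formulation here, so the function names, the set-based framing (recalled vs. ground-truth relevant policies), and the `penalty` weighting are illustrative assumptions, not the authors' implementation.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|; defined as 1.0 when both sets are empty."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def policy_recall_reward(recalled, relevant, catalog, penalty=0.5):
    """Illustrative reward: Jaccard overlap between the policies the model
    recalls in its chain-of-thought and the ground-truth relevant policies,
    minus a penalty for recalled policies that do not exist in the policy
    catalog at all (a stand-in for the paper's Hallucination Penalty).
    All names and the penalty weight are assumptions for illustration."""
    recalled, relevant = set(recalled), set(relevant)
    reward = jaccard(recalled, relevant)
    invented = recalled - set(catalog)  # policies the model made up entirely
    return reward - penalty * len(invented)
```

In a GRPO setup, a scalar reward like this would score each sampled chain-of-thought, so rollouts that recall the right policies (and invent none) are preferred.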

Abstract

Conversational assistants powered by large language models (LLMs) excel at tool-use tasks but struggle with adhering to complex, business-specific rules. While models can reason over business rules provided in context, including all policies for every query introduces high latency and wastes compute. Furthermore, these lengthy prompts lead to long contexts, harming overall performance due to the "needle-in-the-haystack" problem. To address these challenges, we propose a multi-stage alignment method that teaches models to recall and apply relevant business policies during chain-of-thought reasoning at inference time, without including the full business policy in-context. Furthermore, we introduce a novel PolicyRecall reward based on the Jaccard score and a Hallucination Penalty for GRPO training. Altogether, our best model outperforms the baseline by 16 points and surpasses comparable in-context baselines of similar model size by 3 points, while using 40% fewer words.