Offline RL for Adaptive Policy Retrieval in Prior Authorization

arXiv cs.LG / 4/8/2026


Key Points

  • Prior authorization (PA) policy retrieval is framed as a sequential decision process (MDP) where an agent adaptively selects additional policy chunks or stops to issue a decision, optimizing accuracy versus retrieval cost.
  • The approach is trained in an offline RL setting using Conservative Q-Learning (CQL), Implicit Q-Learning (IQL), and Direct Preference Optimization (DPO) on logged trajectories generated from baseline top-K retrieval strategies over synthetic PA requests.
  • On a dataset of 186 policy chunks across 10 CMS procedures, CQL reaches 92% decision accuracy via exhaustive retrieval, a 30-percentage-point improvement over the best fixed-K baseline.
  • IQL achieves baseline-matching accuracy while reducing retrieval steps by 44% and is the only method with positive episodic return, indicating better efficiency under the learned stopping behavior.
  • Transition-level DPO matches CQL’s 92% accuracy while using about 47% fewer retrieval steps, and results depend strongly on the retrieval step cost λ: CQL shifts from exhaustive to selective retrieval only at λ = 0.2.
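
As a concrete illustration of the MDP framing above, the sketch below implements a minimal retrieve-or-stop environment with the accuracy-versus-cost reward. The class and names (`PolicyRetrievalEnv`, `LAMBDA`) and the ±1 terminal reward are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the adaptive policy-retrieval MDP: at each step the agent
# either retrieves one more candidate chunk (paying a per-step cost LAMBDA)
# or stops and issues a decision (earning a terminal correctness reward).
# All names and reward magnitudes here are illustrative assumptions.

LAMBDA = 0.1  # per-step retrieval cost; the paper ablates {0.05, 0.1, 0.2}

class PolicyRetrievalEnv:
    """State: set of retrieved chunk ids. Actions: retrieve chunk i, or STOP."""
    STOP = -1

    def __init__(self, candidate_ids, relevant_ids):
        self.candidates = list(candidate_ids)  # top-K candidate chunks
        self.relevant = set(relevant_ids)      # chunks needed for a correct decision
        self.retrieved = set()

    def step(self, action):
        """Return (reward, done) for one transition."""
        if action == self.STOP:
            # Terminal reward: +1 if the agent gathered enough evidence
            # to decide correctly, -1 otherwise (assumed reward scale).
            correct = self.relevant <= self.retrieved
            return (1.0 if correct else -1.0), True
        self.retrieved.add(action)
        return -LAMBDA, False  # each retrieval step incurs the cost
```

Under this sketch, retrieving exactly the two relevant chunks and then stopping yields an episodic return of 1 − 2λ = 0.8 at λ = 0.1, while exhaustive retrieval of all K candidates trades extra cost for certainty, which is the accuracy-efficiency tension the λ ablation probes.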

Abstract

Prior authorization (PA) requires interpretation of complex and fragmented coverage policies, yet existing retrieval-augmented systems rely on static top-K strategies with fixed numbers of retrieved sections. Such fixed retrieval can be inefficient and gather irrelevant or insufficient information. We model policy retrieval for PA as a sequential decision-making problem, formulating adaptive retrieval as a Markov Decision Process (MDP). In our system, an agent iteratively selects policy chunks from a top-K candidate set or chooses to stop and issue a decision. The reward balances decision correctness against retrieval cost, capturing the trade-off between accuracy and efficiency. We train policies using Conservative Q-Learning (CQL), Implicit Q-Learning (IQL), and Direct Preference Optimization (DPO) in an offline RL setting on logged trajectories generated from baseline retrieval strategies over synthetic PA requests derived from publicly available CMS coverage data. On a corpus of 186 policy chunks spanning 10 CMS procedures, CQL achieves 92% decision accuracy (+30 percentage points over the best fixed-K baseline) via exhaustive retrieval, while IQL matches the best baseline accuracy using 44% fewer retrieval steps and achieves the only positive episodic return among all policies. Transition-level DPO matches CQL's 92% accuracy while using 47% fewer retrieval steps (10.6 vs. 20.0), occupying a "selective-accurate" region on the Pareto frontier that dominates both CQL and BC. A behavioral cloning baseline matches CQL, confirming that advantage-weighted or preference-based policy extraction is needed to learn selective retrieval. Lambda ablation over step costs λ ∈ {0.05, 0.1, 0.2} reveals a clear accuracy-efficiency inflection: only at λ = 0.2 does CQL transition from exhaustive to selective retrieval.
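
The conservatism that keeps CQL anchored to the logged exhaustive-retrieval behavior can be sketched in tabular form. The function below is a hypothetical simplification of CQL's regularizer (the full loss also includes a TD-error term and a weighting coefficient α, which are omitted here); it pushes down a soft maximum over all actions' Q-values while pushing up the Q-value of the action actually taken in the dataset.

```python
import math

# Illustrative sketch of CQL's conservative regularizer for one state:
# logsumexp over Q(s, .) minus Q(s, a_logged). Minimizing this penalty
# concentrates Q-value mass on actions seen in the logged trajectories,
# which is why CQL can inherit the behavior policy's exhaustive retrieval.
# Tabular form and names are assumptions, not the paper's implementation.

def cql_penalty(q_row, logged_action):
    """q_row: list of Q(s, a) over all actions; logged_action: dataset action index."""
    logsumexp = math.log(sum(math.exp(q) for q in q_row))  # soft max over actions
    return logsumexp - q_row[logged_action]  # always >= 0; zero only in the limit
                                             # where all mass sits on the logged action
```

For example, with Q-values `[1.0, 2.0]` and logged action `1`, the penalty is log(e¹ + e²) − 2 = log(1 + e⁻¹) ≈ 0.313; the penalty shrinks as the logged action's Q-value dominates, which matches the intuition that CQL without advantage weighting tends to reproduce the logged (exhaustive) retrieval pattern.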