Frictive Policy Optimization for LLMs: Epistemic Intervention, Risk-Sensitive Control, and Reflective Alignment

arXiv cs.LG / April 29, 2026


Key Points

  • The paper proposes Frictive Policy Optimization (FPO), a framework that learns LLM policies to decide not just what to say, but when to intervene to manage epistemic and normative risk over time.
  • It reframes alignment as a risk-sensitive epistemic control problem, selecting interventions based on their expected impact on downstream epistemic quality rather than immediate reward.
  • FPO models clarification, verification, challenge, redirection, and refusal as explicit “control actions,” supported by a taxonomy of frictive interventions and a structured friction functional covering multiple alignment failure modes (a minimal sketch of this control-action view follows the list).
  • The approach includes a unified family of methods (e.g., reward shaping, preference pairing, group-relative ranking, and risk-conditioned trust regions) and introduces evaluation metrics focused on epistemic competence and information efficiency.
  • Overall, the work aims to ground algorithmic alignment in epistemic conduct, improving behaviors such as calibration, contradiction repair, and refusal proportionality, rather than task outcomes alone.
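
The paper's exact formalism is not reproduced in this summary, so the following is a minimal sketch of the "interventions as control actions" idea under stated assumptions: the action set, the toy epistemic state, the per-action friction costs, and the risk-reduction model are all illustrative, not the authors' definitions.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict

class FrictiveAction(Enum):
    """Illustrative action set; the paper's taxonomy may differ in granularity."""
    ANSWER = auto()      # respond directly, no friction
    CLARIFY = auto()     # ask the user to resolve ambiguity first
    VERIFY = auto()      # check a claim before committing to it
    CHALLENGE = auto()   # push back on a dubious premise
    REDIRECT = auto()    # steer toward a better-posed or safer question
    REFUSE = auto()      # decline when normative risk is unacceptable

@dataclass
class EpistemicState:
    """Toy stand-in for the belief/commitment/uncertainty state FPO regulates."""
    ambiguity: float     # how underspecified the request is, in [0, 1]
    claim_risk: float    # chance a direct answer asserts something false
    premise_risk: float  # chance the user's framing is itself wrong
    harm_risk: float     # normative risk of complying at all

# Hypothetical per-action costs: friction is not free, so each intervention
# carries a price that its expected risk reduction must outweigh.
FRICTION_COST: Dict[FrictiveAction, float] = {
    FrictiveAction.ANSWER: 0.0,
    FrictiveAction.CLARIFY: 0.1,
    FrictiveAction.VERIFY: 0.15,
    FrictiveAction.CHALLENGE: 0.2,
    FrictiveAction.REDIRECT: 0.2,
    FrictiveAction.REFUSE: 0.5,
}

def expected_residual_risk(state: EpistemicState, action: FrictiveAction) -> float:
    """Assumed model of how each action reduces downstream epistemic risk."""
    base = state.claim_risk + state.premise_risk + state.harm_risk
    reduction = {
        FrictiveAction.ANSWER: 0.0,
        FrictiveAction.CLARIFY: 0.8 * state.ambiguity * state.claim_risk,
        FrictiveAction.VERIFY: 0.7 * state.claim_risk,
        FrictiveAction.CHALLENGE: 0.7 * state.premise_risk,
        FrictiveAction.REDIRECT: 0.5 * (state.premise_risk + state.harm_risk),
        FrictiveAction.REFUSE: state.harm_risk,  # refusal removes the harm channel
    }[action]
    return max(base - reduction, 0.0)

def select_action(state: EpistemicState, risk_weight: float = 1.0) -> FrictiveAction:
    """Pick the action minimizing risk-weighted residual risk plus friction cost."""
    return min(
        FrictiveAction,
        key=lambda a: risk_weight * expected_residual_risk(state, a) + FRICTION_COST[a],
    )

# Example: an ambiguous request resting on a shaky premise should trigger friction.
state = EpistemicState(ambiguity=0.9, claim_risk=0.6, premise_risk=0.5, harm_risk=0.1)
print(select_action(state))  # FrictiveAction.CLARIFY under these toy numbers
```

The point of the sketch is the decision structure, not the numbers: an FPO-style policy trades the cost of interrupting the user against the expected improvement in downstream epistemic quality, and a learned policy would replace the hand-set tables above.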

Abstract

We propose Frictive Policy Optimization (FPO), a framework for learning language model policies that regulate not only what to say, but when and how to intervene in order to manage epistemic and normative risk. Unlike standard alignment methods that optimize surface-level preference or task utility, FPO treats clarification, verification, challenge, redirection, and refusal as explicit control actions whose purpose is to shape the evolution of belief, commitment, and uncertainty over time. We formalize alignment as a risk-sensitive epistemic control problem in which intervention decisions are selected based on their expected effect on downstream epistemic quality rather than on immediate reward alone. We introduce a compact taxonomy of frictive interventions, a structured friction functional that operationalizes multiple alignment failure modes, and a unified family of FPO methods spanning reward shaping, preference pairing, group-relative ranking, and risk-conditioned trust regions. We further propose an evaluation framework that measures epistemic competence directly through clarification behavior, calibration, contradiction repair, refusal proportionality, and information efficiency. Together, these results provide a formal and algorithmic foundation for learning agents that are aligned not only in outcome, but in epistemic conduct.
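
The abstract does not spell out the objective, but a friction-shaped, risk-sensitive formulation of the kind it describes might look as follows. The friction functional $F$, the weight $\lambda$, and the choice of an entropic risk measure are illustrative assumptions for this summary, not the paper's notation.

```latex
% Friction-shaped return: immediate task reward minus a weighted friction
% functional F that scores alignment failure modes at each step.
G(\tau) = \sum_{t=0}^{T} \gamma^{t} \bigl( r(s_t, a_t) - \lambda\, F(s_t, a_t) \bigr)

% Risk-sensitive epistemic control: optimize an entropic risk measure of the
% shaped return rather than its plain expectation; beta > 0 penalizes policies
% that occasionally incur large epistemic failures even if they do well on average.
J_{\beta}(\pi) = -\tfrac{1}{\beta} \log \mathbb{E}_{\tau \sim \pi}
    \bigl[ \exp\bigl( -\beta\, G(\tau) \bigr) \bigr]
```

On this reading, the reward-shaping, preference-pairing, group-relative ranking, and risk-conditioned trust-region variants the paper lists would plausibly be different estimators and optimizers for an objective of this shape, differing in how they expose the friction term to the learner.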