HIPO: Instruction Hierarchy via Constrained Reinforcement Learning

arXiv cs.LG / 3/18/2026

Key Points

  • HIPO introduces a constrained reinforcement learning framework that treats Hierarchical Instruction Following as a Constrained Markov Decision Process, enforcing system prompts as explicit algorithmic boundaries.
  • The method uses a primal-dual safe RL approach to maximize user utility while remaining within the feasible region defined by the system prompts, addressing multi-objective alignment gaps in RLHF and DPO.
  • Experimental results show improved system compliance and user utility across diverse architectures such as Qwen, Phi, and Llama, indicating robust cross-model applicability.
  • Mechanistic analysis reveals that the constrained optimization naturally shifts attention toward long-range system tokens, supporting reliable LLM deployment in complex workflows.

Abstract

Hierarchical Instruction Following (HIF) refers to the problem of prompting large language models with a priority-ordered stack of instructions. Standard methods like RLHF and DPO typically fail at this problem because they optimize for a single objective and do not explicitly enforce system prompt compliance. Meanwhile, supervised fine-tuning relies on mimicking filtered, compliant data, which fails to establish the priority asymmetry at the algorithmic level. In this paper, we introduce HIPO, a novel alignment framework that formulates HIF as a Constrained Markov Decision Process. HIPO elevates system prompts from mere input context to strict algorithmic boundaries. Using a primal-dual safe reinforcement learning approach, the algorithm dynamically enforces system prompt compliance as an explicit constraint, maximizing user utility strictly within this feasible region. Extensive evaluations across diverse model architectures (e.g., Qwen, Phi, Llama) demonstrate that HIPO significantly improves both system compliance and user utility. Furthermore, mechanistic analysis reveals that this constrained optimization autonomously drives the model to shift its attention toward long-range system tokens, providing a principled foundation for reliable LLM deployment in complex workflows.
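The primal-dual pattern the abstract describes can be illustrated with a toy sketch. Everything below is assumed for illustration and is not taken from the paper: a one-dimensional "policy parameter," a hypothetical scalar compliance cost with a budget, and hand-picked step sizes. The general recipe, however, is the standard Lagrangian one: ascend the utility-minus-λ·cost objective in the primal variable, then grow the multiplier λ whenever the compliance constraint is violated, so utility is maximized only inside the feasible region.

```python
# Toy primal-dual update for a constrained objective:
#   maximize utility(theta)  subject to  cost(theta) <= budget.
# All names and numbers here are illustrative, not from HIPO itself.

def primal_dual_step(theta, lam, utility_grad, cost_grad,
                     cost_value, budget, lr_theta=0.1, lr_lam=0.5):
    """One primal-dual iteration.

    Primal: gradient ascent on the Lagrangian L = utility - lam * cost.
    Dual:   projected gradient ascent on lam; lam rises while the
            constraint cost_value <= budget is violated, and is
            clipped at zero so the penalty never becomes a bonus.
    """
    theta = theta + lr_theta * (utility_grad - lam * cost_grad)
    lam = max(0.0, lam + lr_lam * (cost_value - budget))
    return theta, lam


if __name__ == "__main__":
    # Toy problem: utility = -(theta - 2)^2 (unconstrained optimum at 2),
    # compliance cost = theta with budget 1, so the constrained optimum
    # is theta = 1. The multiplier settles where the gradients balance.
    theta, lam = 0.0, 0.0
    for _ in range(500):
        utility_grad = -2.0 * (theta - 2.0)  # d/dtheta of -(theta-2)^2
        theta, lam = primal_dual_step(theta, lam, utility_grad,
                                      cost_grad=1.0, cost_value=theta,
                                      budget=1.0)
```

In HIPO's setting the "cost" would be a measure of system-prompt violation and the primal variable the model's parameters, but the interplay is the same: the dual variable adaptively reweights compliance pressure instead of relying on a fixed penalty coefficient.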