Lever: Inference-Time Policy Reuse under Support Constraints

arXiv cs.LG · April 23, 2026


Key Points

  • The paper studies whether reinforcement learning (RL) policies can be reused at inference time by composing a high-quality policy for a new composite objective without any additional environment interaction.
  • It introduces “lever,” an end-to-end framework that retrieves pre-trained policies, evaluates them using behavioral embeddings, and composes them via offline Q-value composition.
  • The authors focus on a support-limited setting where value propagation is not possible, finding that reuse quality depends heavily on how well the available policies cover the relevant transitions.
  • lever includes composition strategies that trade off performance and computation by limiting exploration over candidate policies.
  • Experiments in deterministic GridWorld show offline inference-time composition can match or sometimes exceed training-from-scratch performance with meaningful speedups, but performance drops for long-horizon tasks that would require value propagation.
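The composition step described above can be sketched in code. The paper's exact composition operator is not given in this summary, so the snippet below is a hypothetical illustration: it assumes tabular Q-functions for each library policy and uses a pointwise max over Q-values, a common choice for offline composition that requires no environment interaction. All names and values are illustrative.

```python
import numpy as np

# Placeholder library: one pre-trained Q-table per sub-task
# (states x actions), standing in for real trained values.
n_states, n_actions = 16, 4
rng = np.random.default_rng(0)
q_library = [rng.random((n_states, n_actions)) for _ in range(3)]

def compose_q(q_tables):
    """Pointwise max over sub-task Q-values -- purely offline,
    no value propagation and no new environment interaction."""
    return np.maximum.reduce(q_tables)

def greedy_policy(q):
    """Act greedily with respect to the composed Q-table."""
    return q.argmax(axis=1)

q_composite = compose_q(q_library)
policy = greedy_policy(q_composite)  # one action index per state
```

Because the composed values are never backed up through the transition dynamics, this kind of construction inherits the support limitation the paper highlights: it can only be as good as the transitions the library policies already cover.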

Abstract

Reinforcement learning (RL) policies are typically trained for fixed objectives, making reuse difficult when task requirements change. We study inference-time policy reuse: given a library of pre-trained policies and a new composite objective, can a high-quality policy be constructed entirely offline, without additional environment interaction? We introduce lever (Leveraging Efficient Vector Embeddings for Reusable policies), an end-to-end framework that retrieves relevant policies, evaluates them using behavioral embeddings, and composes new policies via offline Q-value composition. We focus on the support-limited regime, where no value propagation is possible, and show that the effectiveness of reuse depends critically on the coverage of available transitions. To balance performance and computational cost, lever proposes composition strategies that control the exploration of candidate policies. Experiments in deterministic GridWorld environments show that inference-time composition can match, and in some cases exceed, training-from-scratch performance while providing substantial speedups. At the same time, performance degrades when long-horizon dependencies require value propagation, highlighting a fundamental limitation of offline reuse.
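The retrieval stage of lever selects candidate policies by comparing behavioral embeddings. The abstract does not specify how those embeddings are built or compared, so the following is a minimal sketch under assumptions: each policy is summarized by a fixed-length behavioral vector (for example, state-visitation frequencies from logged rollouts), and candidates are ranked by cosine similarity to an embedding of the new composite objective. The function names are illustrative, not from the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def retrieve(query_embedding, library_embeddings, k=2):
    """Rank library policies by similarity to the query embedding
    and return the indices of the top-k candidates."""
    scores = [cosine(query_embedding, e) for e in library_embeddings]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
```

Limiting composition to the top-k retrieved candidates is one way to realize the performance/computation trade-off the paper describes: a smaller k means less exploration over candidate policies at inference time.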