SERFN: Sample-Efficient Real-World Dexterous Policy Fine-Tuning via Action-Chunked Critics and Normalizing Flows

arXiv cs.RO / 4/7/2026


Key Points

  • The paper introduces SERFN, a sample-efficient off-policy fine-tuning framework for real-world dexterous manipulation that addresses limited interaction budgets and highly multimodal action distributions.
  • SERFN uses a normalizing-flow (NF) policy to produce exact likelihoods for multimodal action chunks, enabling conservative likelihood-regularized updates that are hard to achieve with diffusion policies during fine-tuning.
  • An action-chunked critic is proposed to evaluate entire action sequences rather than per-step actions, improving credit assignment for chunked execution and long-horizon tasks.
  • Experiments on real robotic hardware for two long-horizon manipulation tasks (scissor-based tape cutting and in-hand cube rotation) show SERFN delivers more stable and sample-efficient adaptation than standard approaches.
  • The authors claim this is the first real-hardware demonstration combining likelihood-based multimodal generative policies with chunk-level value learning for dexterous policy fine-tuning.
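The "exact likelihoods" property highlighted above follows from the change-of-variables formula that defines a normalizing flow. As a minimal illustration (a single affine flow layer with made-up parameters, not the paper's architecture), the density of an action chunk can be computed in closed form:

```python
import numpy as np

# Minimal affine normalizing flow over a flattened action chunk.
# All parameters here are illustrative placeholders, not the paper's model.
rng = np.random.default_rng(0)
chunk_dim = 6                              # e.g. a 3-step chunk of 2-DoF actions, flattened
scale = rng.uniform(0.5, 1.5, chunk_dim)   # elementwise affine map: a = scale * z + shift
shift = rng.normal(0.0, 0.1, chunk_dim)

def log_prob(action_chunk):
    """Exact log-likelihood via change of variables:
    log p(a) = log N(z; 0, I) - sum(log |scale|),  where z = (a - shift) / scale."""
    z = (action_chunk - shift) / scale
    log_base = -0.5 * np.sum(z ** 2) - 0.5 * chunk_dim * np.log(2 * np.pi)
    log_det = np.sum(np.log(np.abs(scale)))
    return log_base - log_det

def sample():
    """Draw an action chunk by pushing base noise through the flow."""
    z = rng.standard_normal(chunk_dim)
    return scale * z + shift

a = sample()
print(log_prob(a))   # a finite scalar: the density is exact and tractable
```

Because `log_prob` is tractable, a fine-tuning loss can directly penalize low likelihood of actions under a reference policy (a conservative, likelihood-regularized update), which is what intractable diffusion densities make hard.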

Abstract

Real-world fine-tuning of dexterous manipulation policies remains challenging due to limited real-world interaction budgets and highly multimodal action distributions. Diffusion-based policies, while expressive, do not permit conservative likelihood-based updates during fine-tuning because action probabilities are intractable. In contrast, conventional Gaussian policies collapse under multimodality, particularly when actions are executed in chunks, and standard per-step critics fail to align with chunked execution, leading to poor credit assignment. We present SERFN, a sample-efficient off-policy fine-tuning framework built on a normalizing-flow (NF) policy that addresses these challenges. The normalizing flow policy yields exact likelihoods for multimodal action chunks, allowing conservative, stable policy updates through likelihood regularization and thereby improving sample efficiency. An action-chunked critic evaluates entire action sequences, aligning value estimation with the policy's temporal structure and improving long-horizon credit assignment. To our knowledge, this is the first demonstration of a likelihood-based, multimodal generative policy combined with chunk-level value learning on real robotic hardware. We evaluate SERFN on two challenging dexterous manipulation tasks in the real world: cutting tape with scissors retrieved from a case, and in-hand cube rotation with a palm-down grasp -- both of which require precise, dexterous control over long horizons. On these tasks, SERFN achieves stable, sample-efficient adaptation where standard methods struggle.
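The chunk-level credit assignment the abstract describes can be sketched with a toy TD target: instead of bootstrapping after every primitive action, the critic scores a whole H-step chunk, so H per-step rewards are summed before a single bootstrap. Everything below (the linear stand-in critic `Q`, dimensions, numbers) is illustrative, not from the paper:

```python
import numpy as np

# Toy chunk-level TD target for an action-chunked critic.
gamma = 0.99
H = 4                                   # chunk length (illustrative)
rng = np.random.default_rng(1)
w = rng.normal(size=8)                  # weights of a stand-in linear critic

def Q(state, action_chunk):
    """Stand-in critic over (state, flattened action chunk); the paper's
    critic is a learned network, not this linear placeholder."""
    return float(np.concatenate([state, action_chunk]) @ w)

def chunk_td_target(rewards, next_state, next_chunk):
    """y = sum_{i<H} gamma^i * r_i  +  gamma^H * Q(s', a'_chunk):
    one bootstrap per executed chunk, not per primitive action."""
    discounted = sum(gamma ** i * r for i, r in enumerate(rewards))
    return discounted + gamma ** H * Q(next_state, next_chunk)

rewards = [0.0, 0.0, 1.0, 0.0]          # rewards collected while the chunk executes
s_next = rng.normal(size=4)
a_next = rng.normal(size=4)
y = chunk_td_target(rewards, s_next, a_next)
```

Aligning the bootstrap horizon with the execution unit is the point: a per-step critic would have to assign credit through intermediate states the chunked policy never re-plans from.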