Learning Hybrid-Control Policies for High-Precision In-Contact Manipulation Under Uncertainty

arXiv cs.RO / 4/22/2026


Key Points

  • The paper proposes hybrid position-force control policies for in-contact manipulation, learning to choose force or position control separately across each control dimension under uncertainty.
  • It introduces Mode-Aware Training for Contact Handling (MATCH), which modifies action probabilities so the learning process explicitly reflects the hybrid controller’s mode-selection behavior.
  • Experiments on fragile peg-in-hole tasks under extreme localization uncertainty show MATCH significantly outperforms pose-only control, with up to 10% higher success rates and 5x fewer peg breaks.
  • The approach matches pose-control policies in data efficiency despite using a larger, more complex action space, and shows strong sim-to-real results: higher success rates in high-noise conditions and lower applied force than variable-impedance baselines.
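For context, per-dimension selection between position and force control is the classic hybrid position/force scheme (in the Raibert-Craig sense): a binary selection vector assigns each Cartesian axis to one control law, and the paper's policies learn this selection dynamically. The sketch below illustrates the general mechanism only; the function, gains, and control laws are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hybrid_command(mode, pos_err, force_err, kp=50.0, kf=0.1):
    """Blend per-dimension position and force commands.

    mode[i] == 1 -> dimension i is force-controlled,
    mode[i] == 0 -> dimension i is position-controlled.
    Returns a velocity-like command per dimension (illustrative gains).
    """
    mode = np.asarray(mode, dtype=float)
    pos_cmd = kp * np.asarray(pos_err)      # simple proportional position law
    force_cmd = kf * np.asarray(force_err)  # simple proportional force law
    # Selection vector picks exactly one law per dimension.
    return (1.0 - mode) * pos_cmd + mode * force_cmd

# Example: x and y position-controlled, z (insertion axis) force-controlled.
cmd = hybrid_command(mode=[0, 0, 1],
                     pos_err=[0.01, -0.02, 0.05],
                     force_err=[0.0, 0.0, -2.0])
```

For a peg-in-hole task, putting the insertion axis under force control bounds the contact force directly, which is why this structure helps avoid damaging fragile parts.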

Abstract

Reinforcement learning-based control policies have frequently been demonstrated to be more effective than analytical techniques for many manipulation tasks. Commonly, these methods learn neural control policies that predict end-effector pose changes directly from observed state information. For tasks like inserting delicate connectors, which induce force constraints, pose-based policies have limited explicit control over force and rely on carefully tuned low-level controllers to avoid executing damaging actions. In this work, we present hybrid position-force control policies that learn to dynamically select when to use force or position control in each control dimension. To improve the learning efficiency of these policies, we introduce Mode-Aware Training for Contact Handling (MATCH), which adjusts policy action probabilities to explicitly mirror the mode-selection behavior in hybrid control. We validate the effectiveness of MATCH's learned policies using fragile peg-in-hole tasks under extreme localization uncertainty. We find that MATCH substantially outperforms pose-control policies, solving these tasks with up to 10% higher success rates and 5x fewer peg breaks under common types of state estimation error. MATCH also demonstrates data efficiency equal to pose-control policies, despite learning in a larger and more complex action space. In over 1600 sim-to-real experiments, MATCH succeeds twice as often as pose policies in high-noise settings (68% vs. 33%) and applies approximately 30% less force on average than variable impedance policies on a Franka FR3 in laboratory conditions.
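The abstract does not specify how MATCH adjusts action probabilities, but the stated idea, making the policy's probabilities mirror hybrid mode selection, can be illustrated generically: when a dimension is in force mode, the position command in that dimension is never executed, so only the executed command (plus the mode choice itself) should contribute to the action's log-probability. The following is a hypothetical sketch under assumed per-dimension Gaussian commands and Bernoulli mode choices, not the paper's algorithm.

```python
import numpy as np

def mode_aware_log_prob(mode, mode_p, pos_a, pos_mu, force_a, force_mu,
                        sigma=0.1):
    """Log-probability of a hybrid action that mirrors mode selection.

    mode[i]   : sampled mode for dimension i (1 = force control)
    mode_p[i] : policy's probability of choosing force control
    Only the command actually executed in each dimension contributes,
    so gradients never flow through ignored commands.
    """
    mode = np.asarray(mode, dtype=float)

    def gauss_lp(a, mu):
        # Diagonal Gaussian log-density per dimension (fixed std, assumed).
        return (-0.5 * ((np.asarray(a) - np.asarray(mu)) / sigma) ** 2
                - np.log(sigma * np.sqrt(2.0 * np.pi)))

    # Bernoulli log-probability of the sampled mode vector.
    mode_lp = (mode * np.log(np.asarray(mode_p))
               + (1.0 - mode) * np.log(1.0 - np.asarray(mode_p)))
    # Mask: force-mode dimensions score the force command, others the pose command.
    cmd_lp = (mode * gauss_lp(force_a, force_mu)
              + (1.0 - mode) * gauss_lp(pos_a, pos_mu))
    return float(np.sum(mode_lp + cmd_lp))

lp = mode_aware_log_prob(mode=[1.0], mode_p=[0.5],
                         pos_a=[0.2], pos_mu=[0.0],
                         force_a=[0.0], force_mu=[0.0])
```

The masking means the unexecuted command's log-probability drops out, so policy-gradient updates only credit the part of the action that actually affected the environment, one plausible reading of "adjusting action probabilities to mirror mode selection."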