Sustainable Transfer Learning for Adaptive Robot Skills

arXiv cs.RO / 4/9/2026


Key Points

  • The paper studies sustainable robot learning by reusing experience through policy transfer across different robotic platforms using reinforcement learning for a peg-in-hole task.
  • Policies trained on two distinct robots are evaluated in three settings: zero-shot transfer, fine-tuning after transfer, and training from scratch on the target platform.
  • Zero-shot transfer yields lower task success rates and longer execution times compared with approaches that adapt the transferred policy.
  • Fine-tuning the transferred policy substantially boosts performance while requiring fewer training time-steps than learning from scratch.
  • The authors conclude that transfer plus adaptation improves sample efficiency and generalization, helping reduce costly retraining for more sustainable robotic skill development.
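The three evaluation settings above can be sketched with a toy tabular Q-learning example. Everything below is an illustrative assumption, not the paper's method: a 1-D chain stands in for the peg-in-hole task, and the two "robots" are modeled simply as different goal positions on that chain.

```python
import random

# Toy stand-in: a 1-D "peg alignment" chain where the agent must reach a goal
# index. The goal differs between the "source robot" and "target robot" to
# mimic a platform change. All names and numbers are illustrative assumptions.
N_STATES, ACTIONS, MAX_STEPS = 10, (-1, +1), 30

def run_episode(q, goal, epsilon, rng, alpha=0.5, gamma=0.9, learn=True):
    """One episode of tabular Q-learning; returns True on task success."""
    s = 0
    for _ in range(MAX_STEPS):
        if learn and rng.random() < epsilon:
            a = rng.randrange(len(ACTIONS))                       # explore
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[s][i])   # exploit
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == goal else -0.01                # sparse success reward
        if learn:
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
        if s == goal:
            return True
    return False

def train(q, goal, episodes, rng):
    for _ in range(episodes):
        run_episode(q, goal, epsilon=0.2, rng=rng)
    return q

def success_rate(q, goal, trials=20):
    rng = random.Random(1)
    return sum(run_episode(q, goal, 0.0, rng, learn=False)
               for _ in range(trials)) / trials

rng = random.Random(0)
fresh = lambda: [[0.0, 0.0] for _ in range(N_STATES)]
copy_q = lambda q: [row[:] for row in q]

source_q = train(fresh(), goal=2, episodes=300, rng=rng)      # "robot A"
# Setting 1: zero-shot — reuse the source policy on the target unchanged.
zero_shot = success_rate(copy_q(source_q), goal=7)
# Setting 2: fine-tune — continue training the transferred policy.
fine_tuned = success_rate(
    train(copy_q(source_q), goal=7, episodes=400, rng=rng), goal=7)
# Setting 3: from scratch — train a fresh policy on the target.
scratch = success_rate(train(fresh(), goal=7, episodes=400, rng=rng), goal=7)
print(zero_shot, fine_tuned, scratch)
```

In this sketch the transferred Q-table gives fine-tuning a warm start on the target task, loosely mirroring the paper's finding that adaptation after transfer recovers performance with fewer training time-steps than learning from scratch; the paper's actual experiments use deep RL on physical manipulation, not a tabular toy.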

Abstract

Learning robot skills from scratch is often time-consuming, while reusing data promotes sustainability and improves sample efficiency. This study investigates policy transfer across different robotic platforms, focusing on a peg-in-hole task using reinforcement learning (RL). Policies are trained on two different robots, then transferred and evaluated in three settings: zero-shot transfer, fine-tuning, and training from scratch. Results indicate that zero-shot transfer leads to lower success rates and relatively longer task execution times, while fine-tuning significantly improves performance with fewer training time-steps. These findings highlight that policy transfer with adaptation techniques improves sample efficiency and generalization, reducing the need for extensive retraining and supporting sustainable robotic learning.