Meta-Adaptive Beam Search Planning for Transformer-Based Reinforcement Learning Control of UAVs with Overhead Manipulators under Flight Disturbances

arXiv cs.RO / 3/30/2026


Key Points

  • The paper presents a reinforcement-learning control framework for UAVs with overhead manipulators, targeting poor end-effector trajectory tracking caused by wind and attitude disturbances that couple drone and arm motion.
  • It introduces a transformer-based DDQN agent paired with a meta-adaptive, short-horizon beam-search planner that uses the learned transformer critic as a forward estimator to simulate candidate control sequences (SITL-style lookahead) rather than directly applying actions.
  • The method uses value estimates over short state sequences for lookahead, while a DDQN backbone provides one-step targets to stabilize learning.
  • In experiments on a 3-DoF aerial manipulator under identical training conditions, the approach reports a 10.2% reward increase relative to the DDQN baseline and reduces mean tracking error from about 6% to 3%.
  • Under base drift disturbances, the proposed planner maintains more stable tip trajectory tracking (reported as around 5 cm tracking error), outperforming fixed-beam and transformer-only variants.
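The key points above describe a short-horizon beam search that scores simulated rollouts with a learned critic instead of applying actions directly. The paper does not include code, so the following is a minimal, hypothetical sketch of that planning loop; `step_model` (the simulated forward dynamics) and `critic_value` (the learned value estimator) are stand-in names, not the authors' API.

```python
def beam_search_plan(state, critic_value, step_model, actions,
                     horizon=3, beam_width=4):
    """Short-horizon beam search over discrete action sequences.

    Candidate sequences are expanded in simulation (SITL-style lookahead),
    scored by the critic, and pruned to the beam_width best. Only the
    first action of the winning sequence is executed (receding horizon).
    All component names are hypothetical stand-ins.
    """
    # Each beam entry: (cumulative value, current simulated state, action sequence)
    beams = [(0.0, state, [])]
    for _ in range(horizon):
        candidates = []
        for value, s, seq in beams:
            for a in actions:
                s_next = step_model(s, a)    # simulated rollout, not real actuation
                v = critic_value(s_next, a)  # learned critic scores the candidate
                candidates.append((value + v, s_next, seq + [a]))
        # Keep only the beam_width most promising partial sequences
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][2][0]
```

A "meta-adaptive" variant, as the key points suggest, would additionally adjust `horizon` or `beam_width` online (e.g. widening the beam when disturbance estimates grow) rather than keeping them fixed.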

Abstract

Drones equipped with overhead manipulators offer unique capabilities for inspection, maintenance, and contact-based interaction. However, the motion of the drone and its manipulator is tightly coupled: even small attitude changes caused by wind or control imperfections shift the end-effector away from its intended path. This coupling makes reliable tracking difficult and limits the direct use of learning-based arm controllers originally designed for fixed-base robots. These effects appear consistently in our tests whenever the UAV body experiences drift or rapid attitude corrections. To address this behavior, we develop a reinforcement-learning (RL) framework with a transformer-based double deep Q-network (DDQN), whose core idea is a meta-adaptive planner that applies a short-horizon beam search over candidate control sequences, using the learned critic as the forward estimator. This lets the controller anticipate the end-effector's motion through simulated rollouts rather than executing candidate actions directly on the physical system, realizing a software-in-the-loop (SITL) approach. The lookahead relies on value estimates from a Transformer critic that processes short sequences of states, while a DDQN backbone provides the one-step targets needed to keep the learning process stable. Evaluated on a 3-DoF aerial manipulator under identical training conditions, the proposed meta-adaptive planner shows the strongest overall performance: a 10.2% reward increase, a substantial reduction in mean tracking error (from about 6% to 3%), and a 29.6% improvement in the combined reward-error metric relative to the DDQN baseline. Our method also tracks the target tip trajectory more stably (maintaining roughly 5 cm tracking error) when the drone base drifts due to external disturbances, in contrast to the fixed-beam and Transformer-only variants.
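The abstract's mention of a DDQN backbone supplying one-step targets refers to the standard double-DQN trick: the online network selects the next action while the target network evaluates it, which reduces value overestimation. A minimal sketch of that target computation (the function name and array-based interface are illustrative, not from the paper):

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """One-step double-DQN target.

    next_q_online / next_q_target: Q-value vectors over actions at the
    next state, from the online and target networks respectively.
    Action selection uses the online net; evaluation uses the target net.
    """
    if done:
        return reward  # no bootstrap past a terminal state
    best_action = int(np.argmax(next_q_online))          # select with online net
    return reward + gamma * next_q_target[best_action]   # evaluate with target net
```

In the paper's setup this scalar would serve as the regression target for the Transformer critic's Q-estimate, stabilizing the values that the beam-search planner later uses for lookahead.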