RoboTAG: End-to-end Robot Configuration Estimation via Topological Alignment Graph

arXiv cs.RO / 4/16/2026


Key Points

  • RoboTAG is proposed as an end-to-end method for estimating a robot's pose (configuration) from a monocular RGB image, addressing the limitations of prior work centered on 2D features.
  • To learn 2D and 3D representations jointly, RoboTAG consists of a 3D branch and a 2D branch, with camera/robot states represented as graph nodes and dependencies or alignments between variables as edges.
  • Closed loops are defined on the graph, and consistency supervision across branches is applied along them, reducing reliance on labels while aiming to narrow the sim-to-real gap.
  • Injecting 3D priors through the 3D branch compensates for the drawback of prior approaches that reduce the problem to 2D, and the method is reported to be effective across robot types.

Abstract

Estimating robot pose from a monocular RGB image is a challenge in robotics and computer vision. Existing methods typically build networks on top of 2D visual backbones and depend heavily on labeled data for training, which is often scarce in real-world scenarios, causing a sim-to-real gap. Moreover, these approaches reduce the 3D-based problem to the 2D domain, neglecting the 3D priors. To address these issues, we propose the Robot Topological Alignment Graph (RoboTAG), which incorporates a 3D branch to inject 3D priors while enabling co-evolution of the 2D and 3D representations, alleviating the reliance on labels. Specifically, RoboTAG consists of a 3D branch and a 2D branch, where nodes represent the states of the camera and robot system, and edges capture the dependencies between these variables or denote alignments between them. Closed loops are then defined on the graph, along which a consistency supervision across branches can be applied. Experimental results demonstrate that our method is effective across robot types, suggesting new possibilities for alleviating the data bottleneck in robotics.
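The closed-loop idea in the abstract can be illustrated with a minimal sketch: one loop runs the 3D branch's joint estimates through the camera node (here, a simple pinhole projection) and checks agreement with the 2D branch's keypoints. All function names, the loss form, and the pinhole model are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def project(points_3d, K):
    # Pinhole projection: map 3D camera-frame points to 2D pixel coordinates.
    uv = (K @ points_3d.T).T
    return uv[:, :2] / uv[:, 2:3]

def loop_consistency(joints_3d, keypoints_2d, K):
    # One closed loop on the graph: 3D-branch joints, projected through the
    # camera node, should land on the 2D-branch keypoints. The mean
    # reprojection residual serves as a label-free consistency loss.
    reproj = project(joints_3d, K)
    return float(np.mean(np.linalg.norm(reproj - keypoints_2d, axis=1)))

# Toy example: three joints in front of a camera with a made-up intrinsic K.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
joints = np.array([[ 0.1,  0.0, 2.0],
                   [ 0.0,  0.1, 2.5],
                   [-0.1, -0.1, 3.0]])
kps = project(joints, K)              # perfectly consistent 2D predictions
residual = loop_consistency(joints, kps, K)  # → 0.0 for a consistent loop
```

Because the loss compares the two branches against each other rather than against annotations, it can in principle be computed on unlabeled real images, which is the mechanism the abstract credits for easing the sim-to-real data bottleneck.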