Pixel-level Scene Understanding in One Token: Visual States Need What-is-Where Composition

arXiv cs.RO / 3/26/2026


Key Points

  • The work argues that for visual state representations learned by robots from streaming video, explicitly encoding "what-is-where" — which scene elements are present and where they are — is the key to capturing subtle temporal dynamics.
  • The proposed method, CroBo, compresses a reference image into a single compact bottleneck token and trains with a global-to-local reconstruction objective: using that global context to recover heavily masked regions of a local target crop.
  • The learned bottleneck token is claimed to acquire a fine-grained representation of scene elements — their semantic identities, spatial locations, and configurations — enabling the model to track how elements move and interact across observations.
  • CroBo achieves state-of-the-art performance on vision-based robot policy learning benchmarks, and reconstruction analyses and perceptual straightness experiments show that pixel-level scene composition is preserved.

Abstract

For robotic agents operating in dynamic environments, learning visual state representations from streaming video observations is essential for sequential decision making. Recent self-supervised learning methods have shown strong transferability across vision tasks, but they do not explicitly address what a good visual state should encode. We argue that effective visual states must capture what-is-where by jointly encoding the semantic identities of scene elements and their spatial locations, enabling reliable detection of subtle dynamics across observations. To this end, we propose CroBo, a visual state representation learning framework based on a global-to-local reconstruction objective. Given a reference observation compressed into a compact bottleneck token, CroBo learns to reconstruct heavily masked patches in a local target crop from sparse visible cues, using the global bottleneck token as context. This learning objective encourages the bottleneck token to encode a fine-grained representation of scene-wide semantic entities, including their identities, spatial locations, and configurations. As a result, the learned visual states reveal how scene elements move and interact over time, supporting sequential decision making. We evaluate CroBo on diverse vision-based robot policy learning benchmarks, where it achieves state-of-the-art performance. Reconstruction analyses and perceptual straightness experiments further show that the learned representations preserve pixel-level scene composition and encode what-moves-where across observations. Project page available at: https://seokminlee-chris.github.io/CroBo-ProjectPage.
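The abstract's training objective — compress a reference observation into one bottleneck token, then reconstruct heavily masked patches of a local crop using that token as global context — can be illustrated with a toy sketch. Everything below is hypothetical: the function name `global_to_local_loss`, the linear-projection-plus-mean-pool compression, and the single-layer decoder are stand-ins for the paper's learned networks, chosen only to make the data flow of the objective concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_to_local_loss(reference, crop, mask_ratio=0.75, d=32):
    """Toy version of a global-to-local reconstruction objective.

    reference: (N_ref, P) patch features of the full reference observation
    crop:      (N_crop, P) patch features of a local target crop
    (Shapes and operators are illustrative, not the paper's architecture.)
    """
    N_ref, P = reference.shape
    N_crop = crop.shape[0]

    # 1. Compress the whole reference into a single bottleneck token.
    #    Stand-in for a learned encoder: linear projection + mean-pooling.
    W_enc = rng.standard_normal((P, d)) / np.sqrt(P)
    bottleneck = (reference @ W_enc).mean(axis=0)               # (d,)

    # 2. Heavily mask the target crop, keeping only sparse visible cues.
    n_vis = max(1, int(N_crop * (1 - mask_ratio)))
    idx = rng.permutation(N_crop)
    vis, masked = idx[:n_vis], idx[n_vis:]

    # 3. Predict the masked patches from the visible cues plus the
    #    global bottleneck token acting as scene-wide context.
    W_dec = rng.standard_normal((d + P, P)) / np.sqrt(d + P)
    ctx = np.concatenate([bottleneck, crop[vis].mean(axis=0)])  # (d + P,)
    pred = np.tile(ctx @ W_dec, (len(masked), 1))               # (|masked|, P)

    # 4. Reconstruction loss is computed on the masked patches only,
    #    so the bottleneck must carry what-is-where information.
    return float(np.mean((pred - crop[masked]) ** 2))

ref = rng.standard_normal((196, 48))   # e.g. a 14x14 grid of patch features
crop = rng.standard_normal((49, 48))   # e.g. a 7x7 local crop
loss = global_to_local_loss(ref, crop)
```

Because the decoder sees only sparse visible patches, minimizing this loss pressures the bottleneck token to encode the identities and positions of scene elements — the "what-is-where" composition the paper argues a good visual state needs.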