Cortical Policy: A Dual-Stream View Transformer for Robotic Manipulation

arXiv cs.RO / 3/24/2026


Key Points

  • The paper proposes “Cortical Policy,” a dual-stream view transformer for robotic manipulation that jointly reasons from static-view and dynamic-view inputs rather than using view-specific static features alone.
  • A static-view stream improves 3D spatial understanding by aligning features of geometrically consistent keypoints extracted using a pretrained 3D foundation model.
  • A dynamic-view stream uses position-aware pretraining of an egocentric gaze estimation model to enable adaptive, motion-relevant reasoning, inspired by the human cortical dorsal pathway.
  • The integrated representations from both streams produce language-conditioned actions, and experiments on RLBench, COLOSSEUM, and real-world tasks show substantial gains over state-of-the-art baselines.
  • The authors argue that the cortex-inspired dual-stream design addresses prior limitations in 3D spatial reasoning and dynamic adaptation, with potential for wider vision-based robot control applications.
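The static-view stream's idea of "aligning features of geometrically consistent keypoints" can be illustrated with a minimal sketch. The paper does not specify its alignment objective; the cosine-similarity loss, function name, and toy shapes below are assumptions for illustration only.

```python
import numpy as np

def alignment_loss(feats_a, feats_b):
    """Toy alignment objective for the static-view stream (hypothetical):
    row i of feats_a and feats_b are features of the same geometrically
    consistent keypoint seen from two views; we penalize their mean
    (1 - cosine similarity) so matching keypoints get matching features."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cos = np.sum(a * b, axis=1)          # per-keypoint cosine similarity
    return float(np.mean(1.0 - cos))

# Identical features align perfectly, so the loss is zero.
f = np.array([[1.0, 0.0], [0.0, 1.0]])
print(alignment_loss(f, f))  # → 0.0
```

In practice the keypoint features would come from the pretrained 3D foundation model mentioned above, not from random toy arrays.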

Abstract

View transformers process multi-view observations to predict actions and have shown impressive performance in robotic manipulation. Existing methods typically extract static visual representations in a view-specific manner, leading to inadequate 3D spatial reasoning ability and a lack of dynamic adaptation. Taking inspiration from how the human brain integrates static and dynamic views to address these challenges, we propose Cortical Policy, a novel dual-stream view transformer for robotic manipulation that jointly reasons from static-view and dynamic-view streams. The static-view stream enhances spatial understanding by aligning features of geometrically consistent keypoints extracted from a pretrained 3D foundation model. The dynamic-view stream achieves adaptive adjustment through position-aware pretraining of an egocentric gaze estimation model, computationally replicating the human cortical dorsal pathway. Subsequently, the complementary view representations of both streams are integrated to determine the final actions, enabling the model to handle spatially-complex and dynamically-changing tasks under language conditions. Empirical evaluations on RLBench, the challenging COLOSSEUM benchmark, and real-world tasks demonstrate that Cortical Policy outperforms state-of-the-art baselines substantially, validating the superiority of dual-stream design for visuomotor control. Our cortex-inspired framework offers a fresh perspective for robotic manipulation and holds potential for broader application in vision-based robot control.
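The integration step described in the abstract — combining complementary static-view and dynamic-view representations with a language condition to determine actions — can be sketched as follows. All shapes, names, and the simple concatenate-and-project head are hypothetical; the paper's actual fusion architecture is a transformer and is not detailed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_act(static_feat, dynamic_feat, lang_feat, W):
    """Toy dual-stream integration (hypothetical): concatenate the
    static-view and dynamic-view representations with a language
    embedding, then linearly project to an action vector."""
    joint = np.concatenate([static_feat, dynamic_feat, lang_feat])
    return W @ joint

static_feat  = rng.standard_normal(8)   # static-view stream output
dynamic_feat = rng.standard_normal(8)   # dynamic-view stream output
lang_feat    = rng.standard_normal(4)   # language-instruction embedding
W = rng.standard_normal((7, 20))        # fusion head -> 7-DoF action
action = fuse_and_act(static_feat, dynamic_feat, lang_feat, W)
print(action.shape)  # → (7,)
```

The point of the sketch is only the data flow: both streams contribute to a single joint representation before the action is predicted, rather than each view being decoded separately.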