Robotic Grasping and Placement Controlled by EEG-Based Hybrid Visual and Motor Imagery

arXiv cs.RO / 4/8/2026

Key Points

  • The paper proposes a real-time robotic grasping and placement framework driven by EEG-based hybrid visual imagery (VI) and motor imagery (MI) signals.
  • It uses offline-pretrained EEG decoders in a zero-shot online streaming pipeline to create a dual-channel intent interface: VI selects grasp targets while MI specifies placement poses (a minimal sketch of this mapping follows the list).
  • The system is implemented on a Base robotic platform and validated across scenarios such as occluded targets and different participant postures using a cue-free imagery protocol.
  • Reported performance includes online decoding accuracies of 40.23% for VI and 62.59% for MI, with an end-to-end task success rate of 20.88%.
  • The authors argue the results show that higher-level visual cognition can be decoded in real time and translated into executable robot commands using EEG-only imagery, supporting practical human-robot collaboration.
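
The dual-channel mapping above is easy to picture in code. The sketch below is a hypothetical illustration only, assuming a simple classifier interface (`predict`), a small discrete label set per imagery channel, and a generic robot API; none of these names come from the paper.

```python
# Hypothetical sketch of the dual-channel intent interface: visual imagery (VI)
# selects *what* to grasp, motor imagery (MI) selects *where* to place it.
# Decoder interfaces, label sets, and the robot API are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

# Assumed label sets: VI classes index graspable objects, MI classes index placement poses.
VI_TARGETS = {0: "cup", 1: "bottle", 2: "box"}
MI_POSES = {0: Pose(0.3, -0.2, 0.1), 1: Pose(0.3, 0.2, 0.1)}

def run_trial(vi_decoder, mi_decoder, robot, eeg_stream):
    """One grasp-and-place trial driven purely by imagined intent."""
    vi_window = eeg_stream.read_window()                # EEG segment during visual imagery
    target = VI_TARGETS[vi_decoder.predict(vi_window)]  # decode *what* to grasp
    robot.grasp(target)

    mi_window = eeg_stream.read_window()                # EEG segment during motor imagery
    pose = MI_POSES[mi_decoder.predict(mi_window)]      # decode *where* to place
    robot.place(pose)
```

One consequence of this split is that each channel needs only a small discrete label space, which keeps the single-trial EEG decoding problem tractable.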

Abstract

We present a framework that integrates EEG-based visual and motor imagery (VI/MI) with robotic control to enable real-time, intention-driven grasping and placement. Motivated by the promise of BCI-driven robotics to enhance human-robot interaction, the system bridges neural signals with physical control by deploying offline-pretrained decoders in a zero-shot manner within an online streaming pipeline. This establishes a dual-channel intent interface that translates imagined intent into robotic actions, with VI identifying objects for grasping and MI determining placement poses, giving the user intuitive control over both what to grasp and where to place it. The system operates solely on EEG via a cue-free imagery protocol, enabling end-to-end integration and online validation. Implemented on a Base robotic platform and evaluated across diverse scenarios, including occluded targets and varying participant postures, the system achieves online decoding accuracies of 40.23% (VI) and 62.59% (MI), with an end-to-end task success rate of 20.88%. These results demonstrate that high-level visual cognition can be decoded in real time and translated into executable robot commands, bridging the gap between neural signals and physical interaction, and validating the flexibility of a purely imagery-based BCI paradigm for practical human-robot collaboration.
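
"Zero-shot within an online streaming pipeline" means the offline-trained decoders are reused on live data with no online recalibration. The sketch below illustrates one plausible form of that loop, a fixed-length window sliding over buffered EEG and fed to a frozen decoder; the window length, hop size, sampling rate, and decoder interface are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a zero-shot online streaming decode loop: an
# offline-pretrained decoder is applied to sliding EEG windows as-is,
# with no online adaptation. All parameters below are assumed values.
import numpy as np

FS = 250            # assumed sampling rate (Hz)
WINDOW_S = 2.0      # assumed analysis window (s)
STEP_S = 0.5        # assumed hop between decodes (s)

def stream_decode(decoder, eeg_buffer, on_intent):
    """Slide a fixed window over buffered EEG (channels x samples) and
    pass each decoded intent label to the on_intent callback."""
    n_win, n_hop = int(WINDOW_S * FS), int(STEP_S * FS)
    start = 0
    while start + n_win <= eeg_buffer.shape[1]:
        window = eeg_buffer[:, start:start + n_win]
        # Zero-shot reuse: the decoder stays frozen, no per-session calibration.
        on_intent(decoder.predict(window[np.newaxis, ...]))
        start += n_hop

if __name__ == "__main__":
    class DummyDecoder:
        def predict(self, x):
            return int(x.mean() > 0)   # stand-in for a trained classifier

    rng = np.random.default_rng(0)
    buffer = rng.standard_normal((32, 10 * FS))   # 32 channels, 10 s of EEG
    stream_decode(DummyDecoder(), buffer, on_intent=print)
```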