AI Navigate

Int3DNet: Scene-Motion Cross Attention Network for 3D Intention Prediction in Mixed Reality

arXiv cs.CV / 3/17/2026

📰 News · Models & Research

Key Points

  • Int3DNet proposes a scene-aware network that predicts 3D intention areas directly from scene geometry and head-hand motion cues in Mixed Reality.
  • The model uses a cross-attention fusion of sparse motion cues and scene point clouds to interpret user spatial intention without relying on explicit object-level perception.
  • It is evaluated on MoGaze and CIRCLE datasets, showing consistent 3D intention prediction performance across time horizons up to 1500 ms and outperforming baselines in diverse and unseen scenes.
  • The authors show practical usability through an efficient visual question answering (VQA) demo driven by predicted intention areas, showcasing proactive MR interaction.

Abstract

We propose Int3DNet, a scene-aware network that predicts 3D intention areas directly from scene geometry and head-hand motion cues, enabling robust human intention prediction without explicit object-level perception. In Mixed Reality (MR), intention prediction is critical as it enables the system to anticipate user actions and respond proactively, reducing interaction delays and ensuring seamless user experiences. Our method employs a cross-attention fusion of sparse motion cues and scene point clouds, offering a novel approach that directly interprets the user's spatial intention within the scene. We evaluated Int3DNet on MoGaze and CIRCLE, two public datasets of full-body human-scene interactions, showing consistent performance across time horizons of up to 1500 ms and outperforming the baselines, even in diverse and unseen scenes. Moreover, we demonstrate the usability of the proposed method through a demonstration of efficient visual question answering (VQA) based on intention areas. Int3DNet provides reliable 3D intention areas derived from head-hand motion and scene geometry, thus enabling seamless interaction between humans and MR systems through proactive processing of intention areas.
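The cross-attention fusion at the core of the method can be sketched as below. This is a minimal single-head NumPy illustration of attending from sparse motion-cue tokens (queries) to scene point-cloud features (keys/values); all shapes, names, and the single-head formulation are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(motion_tokens, scene_points, Wq, Wk, Wv):
    """Single-head scaled dot-product cross-attention (sketch).

    motion_tokens: (T, d_m) sparse head-hand motion cues -> queries
    scene_points:  (N, d_s) per-point scene features -> keys/values
    Returns (T, d) scene-conditioned motion features.
    """
    Q = motion_tokens @ Wq                    # (T, d)
    K = scene_points @ Wk                     # (N, d)
    V = scene_points @ Wv                     # (N, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (T, N)
    attn = softmax(scores, axis=-1)           # each motion token attends over all scene points
    return attn @ V

# Toy shapes: 8 motion frames, 256 scene points, 32-dim embeddings (all hypothetical).
rng = np.random.default_rng(0)
T, N, d_m, d_s, d = 8, 256, 12, 6, 32
fused = cross_attention(
    rng.standard_normal((T, d_m)), rng.standard_normal((N, d_s)),
    rng.standard_normal((d_m, d)), rng.standard_normal((d_s, d)),
    rng.standard_normal((d_s, d)),
)
print(fused.shape)  # (8, 32)
```

In a full model, the fused features would feed a decoder head that regresses the 3D intention area; only the attention step is shown here.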