STRNet: Visual Navigation with Spatio-Temporal Representation through Dynamic Graph Aggregation

arXiv cs.CV / 4/6/2026


Key Points

  • The paper addresses the limitations of recent learning-based visual navigation methods that use simple visual encoders and temporal pooling, which can discard fine-grained spatial/temporal structure needed for accurate action and progress prediction.
  • It introduces STRNet, a unified spatio-temporal representation framework that extracts features from both first-person image sequences and goal observations and fuses them via a dedicated spatio-temporal fusion module.
  • STRNet performs per-frame spatial graph reasoning, while capturing temporal dynamics through a hybrid temporal shift module paired with multi-resolution difference-aware convolution.
  • Experiments reportedly show consistent improvements in navigation performance and suggest STRNet provides a generalizable visual backbone for goal-conditioned robot control.
  • The authors have released the STRNet code, enabling others to reproduce and build on the proposed backbone and fusion design.
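The "hybrid temporal shift" idea in the key points can be illustrated with a minimal sketch: a slice of feature channels is shifted forward in time, another slice backward, so each frame's features mix in temporal context at zero parameter cost. This is a generic temporal-shift sketch in the spirit of standard TSM-style modules, not STRNet's exact hybrid design; the function name and channel-split ratio are assumptions.

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """Shift a fraction of channels along the time axis (illustrative sketch).

    x: array of shape (T, C) -- one feature vector per frame.
    A 1/shift_div slice of channels is shifted forward in time, another
    1/shift_div slice backward; remaining channels are left in place.
    Boundary positions are zero-padded.
    """
    T, C = x.shape
    fold = C // shift_div
    out = np.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]               # forward shift: past -> present
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]  # backward shift: future -> present
    out[:, 2 * fold:] = x[:, 2 * fold:]          # untouched channels
    return out
```

Because the operation is pure slicing, it adds temporal mixing without any learned weights; a real module would typically interleave it with convolutions over the unshifted channels.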

Abstract

Visual navigation requires a robot to reach a specified goal, such as a target image, based on a sequence of first-person visual observations. While recent learning-based approaches have made significant progress, they often focus on improving policy heads or decision strategies while relying on simplistic feature encoders and temporal pooling to represent visual input. This discards fine-grained spatial and temporal structure, ultimately limiting accurate action prediction and progress estimation. In this paper, we propose a unified spatio-temporal representation framework that enhances visual encoding for robotic navigation. Our approach extracts features from both image sequences and goal observations, and fuses them with a dedicated spatio-temporal fusion module. This module performs spatial graph reasoning within each frame and models temporal dynamics using a hybrid temporal shift module combined with multi-resolution difference-aware convolution. Experimental results demonstrate that our approach consistently improves navigation performance and offers a generalizable visual backbone for goal-conditioned control. Code is available at https://github.com/hren20/STRNet.
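The per-frame spatial graph reasoning the abstract describes can be sketched as one round of dense graph aggregation: region features within a frame act as graph nodes, softmax-normalized pairwise affinities act as edge weights, and each node is updated as the affinity-weighted sum of all nodes plus a residual. This is a minimal self-attention-style sketch under assumed shapes, not STRNet's actual module; the function name and the dot-product affinity are assumptions.

```python
import numpy as np

def spatial_graph_reasoning(nodes):
    """One round of dense graph aggregation over spatial regions (sketch).

    nodes: array of shape (N, D) -- N region features from a single frame.
    Edge weights are a row-softmax of scaled pairwise dot products; each
    node becomes its original feature plus the edge-weighted sum of all
    node features (residual aggregation).
    """
    sim = nodes @ nodes.T / np.sqrt(nodes.shape[1])  # pairwise affinity
    sim = sim - sim.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)          # row-normalized edge weights
    return nodes + attn @ nodes                      # aggregate with residual
```

In a learned module the affinity and aggregation would use trainable projections; the sketch keeps only the graph-aggregation structure.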