See, Remember, Explore: A Benchmark and Baselines for Streaming Spatial Reasoning

arXiv cs.CV / 3/26/2026


Key Points

  • Introduces S3-Bench, a new benchmark suite for streaming spatial question answering that requires answers to be based only on observations available up to the query timestamp (temporally grounded evaluation).
  • Proposes an active perception setting in which models must explore (e.g., move/rotate/scan) to acquire missing evidence when the current view is insufficient.
  • Designs S3-Bench with both a scalable simulator (controllable trajectories and exploration actions) and real-world streaming videos to test generalization under practical sensing artifacts.
  • Develops AMF-VLM, enabling bounded-compute streaming spatial reasoning via memory folding (compressing long-horizon observations into structured memory) and action-based exploration.
  • Reports sizable gains over baselines trained on the same data, with 8.8% and 13.3% improvements on the simulated and real S3-Eval splits, while preserving competitive transfer to standard spatial benchmarks.
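The temporally grounded evaluation protocol described above can be sketched as a simple filter over the observation stream: only frames with timestamps at or before the query time are visible to the model. This is an illustrative toy, not the benchmark's actual data format; the `Observation` structure and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    timestamp: float  # seconds since the stream started
    frame_id: int     # index of the video frame

def visible_history(stream, query_time):
    """Return only the observations available up to the query timestamp."""
    return [obs for obs in stream if obs.timestamp <= query_time]

# Hypothetical 5-frame stream, one frame per second, queried at t = 2.0 s:
stream = [Observation(float(t), t) for t in range(5)]
history = visible_history(stream, query_time=2.0)
assert [obs.frame_id for obs in history] == [0, 1, 2]
```

Any answer the model produces for a query at t = 2.0 s must be derivable from frames 0-2 alone, which is what distinguishes this setting from offline post-hoc QA over the full recording.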

Abstract

Spatial understanding is fundamental for embodied agents, yet most spatial VLMs and benchmarks remain offline, evaluating post-hoc QA over pre-recorded inputs and overlooking two deployment-critical requirements: long-horizon streaming inference and active perception when the current view is insufficient. To address this gap, we introduce S3-Bench, a benchmark suite for streaming spatial question answering with active exploration, where queries are temporally grounded to specific timestamps and must be answered using only observations available up to that moment. S3-Bench adopts a dual-domain design, combining a scalable simulator with controllable trajectories and exploration actions, and real-world streaming videos that capture practical sensing artifacts for rigorous generalization evaluation. Overall, it spans 10K+ scenes and 26K+ trajectories, with dedicated training (S3-Train) and evaluation (S3-Eval) splits. We further propose AMF-VLM, which supports streaming spatial reasoning under bounded compute via (i) memory folding, which compresses long-horizon observations into compact structured memory, and (ii) active exploration, which outputs explicit actions (e.g., move/rotate/scan) to acquire missing evidence before answering. Extensive experiments demonstrate that, compared to models using identical training data, our approach yields improvements of 8.8% and 13.3% on the simulated and real splits of S3-Eval, respectively, while maintaining competitive transferability to standard spatial benchmarks.
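The memory-folding idea (compressing a long-horizon stream into a fixed-size structured memory so compute stays bounded) can be illustrated with a toy buffer that, when full, folds its two oldest entries into one summary. The paper folds learned feature representations; here entries are plain dicts, and "folding" is mocked as averaging a feature vector and unioning object tags. The class name, entry structure, and folding rule are all assumptions for illustration.

```python
from collections import deque

class FoldingMemory:
    """Bounded memory: when over capacity, fold the two oldest entries into one."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.slots = deque()

    def write(self, entry):
        self.slots.append(entry)
        while len(self.slots) > self.capacity:
            a = self.slots.popleft()
            b = self.slots.popleft()
            # Toy fold: average features, union the sets of observed objects.
            folded = {
                "feature": [(x + y) / 2 for x, y in zip(a["feature"], b["feature"])],
                "objects": a["objects"] | b["objects"],
            }
            self.slots.appendleft(folded)

mem = FoldingMemory(capacity=2)
for t in range(4):
    mem.write({"feature": [float(t)], "objects": {f"obj{t}"}})
assert len(mem.slots) == 2  # memory size is constant regardless of stream length
```

The design point this illustrates: per-step cost depends only on `capacity`, not on how long the agent has been streaming, which is what makes long-horizon inference tractable. Older evidence is not discarded outright but survives in progressively coarser summaries.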