Memory Over Maps: 3D Object Localization Without Reconstruction

arXiv cs.RO, March 24, 2026


Key Points

  • The paper proposes a “map-free” 3D object localization pipeline that avoids building global 3D scene representations (e.g., point clouds, voxel grids, or scene graphs).
  • Instead of reconstruction, it stores a lightweight visual memory of posed RGB-D keyframes and, at query time, retrieves relevant views, re-ranks them with a vision-language model, and performs sparse on-demand 3D estimation via depth backprojection and multi-view fusion.
  • The authors report preprocessing and storage improvements, claiming over two orders of magnitude faster scene indexing than reconstruction-based pipelines while using substantially less storage.
  • They validate the approach on object-goal navigation tasks and find it performs strongly across multiple benchmarks without task-specific training, suggesting dense reconstruction may be unnecessary for object-centric robotic navigation.
  • The work reframes object localization as retrieval and semantic re-ranking over image-based memory, leveraging vision-language reasoning to replace expensive 3D reconstruction steps.
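The query-time flow described above (retrieve candidate keyframes, re-rank them with a vision-language model, then localize in 3D) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Keyframe` class, the cosine-similarity retrieval, and the stubbed `vlm_relevance` scorer are all hypothetical stand-ins for the real components.

```python
import numpy as np

# Illustrative sketch of retrieval + VLM re-ranking over a keyframe memory.
# All names (Keyframe, retrieve, vlm_relevance) are assumptions, not the
# paper's API; a real system would use learned image/text embeddings and
# query an actual vision-language model in stage 2.

class Keyframe:
    def __init__(self, frame_id, embedding, pose):
        self.frame_id = frame_id
        self.embedding = np.asarray(embedding, dtype=float)
        self.pose = pose  # 4x4 camera-to-world matrix in a real pipeline

def retrieve(memory, query_emb, k=2):
    """Stage 1: rank keyframes by cosine similarity to the query embedding."""
    q = np.asarray(query_emb, dtype=float)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(memory, key=lambda f: cos(f.embedding, q), reverse=True)[:k]

def vlm_relevance(frame, query_text):
    """Stage 2 stub: a real system would ask a vision-language model whether
    the queried object is actually visible in this keyframe."""
    return 1.0  # placeholder score

def localize(memory, query_emb, query_text):
    candidates = retrieve(memory, query_emb)
    best = max(candidates, key=lambda f: vlm_relevance(f, query_text))
    # The full pipeline would now backproject depth from `best` (and other
    # supporting views) and fuse the points into a sparse 3D estimate.
    return best.frame_id

memory = [
    Keyframe("kf0", [1.0, 0.0], None),
    Keyframe("kf1", [0.0, 1.0], None),
    Keyframe("kf2", [0.9, 0.1], None),
]
best = localize(memory, [1.0, 0.05], "a red mug")  # → "kf0"
```

Because only keyframe embeddings and poses are stored, indexing a scene reduces to encoding its images, which is where the claimed preprocessing and storage savings come from.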

Abstract

Target localization is a prerequisite for embodied tasks such as navigation and manipulation. Conventional approaches rely on constructing explicit 3D scene representations to enable target localization, such as point clouds, voxel grids, or scene graphs. While effective, these pipelines incur substantial mapping time, storage overhead, and scalability limitations. Recent advances in vision-language models suggest that rich semantic reasoning can be performed directly on 2D observations, raising a fundamental question: is a complete 3D scene reconstruction necessary for object localization? In this work, we revisit object localization and propose a map-free pipeline that stores only posed RGB-D keyframes as a lightweight visual memory, without constructing any global 3D representation of the scene. At query time, our method retrieves candidate views, re-ranks them with a vision-language model, and constructs a sparse, on-demand 3D estimate of the queried target through depth backprojection and multi-view fusion. Compared to reconstruction-based pipelines, this design drastically reduces preprocessing cost, enabling scene indexing that is over two orders of magnitude faster to build while using substantially less storage. We further validate the localized targets on downstream object-goal navigation tasks. Despite requiring no task-specific training, our approach achieves strong performance across multiple benchmarks, demonstrating that direct reasoning over image-based scene memory can effectively replace dense 3D reconstruction for object-centric robot navigation. Project page: https://ruizhou-cn.github.io/memory-over-maps/
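The abstract's final geometric step, depth backprojection with multi-view fusion, follows the standard pinhole-camera model. The sketch below is a hedged illustration of that step, not the paper's code: `K` is an assumed 3x3 intrinsic matrix, `T_wc` an assumed 4x4 camera-to-world pose, and the fusion is a naive average of per-view estimates.

```python
import numpy as np

# Hypothetical sketch of on-demand 3D estimation via depth backprojection.
# A pixel (u, v) with metric depth d is lifted to a camera-frame point
# d * K^{-1} [u, v, 1]^T, then mapped to world coordinates by the pose T_wc.

def backproject(u, v, depth, K, T_wc):
    """Lift a pixel with metric depth into world coordinates."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel -> camera-frame ray
    p_cam = depth * ray                             # scale ray by depth
    p_world = T_wc @ np.append(p_cam, 1.0)          # camera -> world (homogeneous)
    return p_world[:3]

def fuse(points):
    """Naive multi-view fusion: average the per-view 3D estimates."""
    return np.mean(np.stack(points), axis=0)

# Toy intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
T_wc = np.eye(4)  # identity pose: camera frame == world frame

# The principal-point pixel at depth 2 m lies on the optical axis.
p = backproject(320, 240, 2.0, K, T_wc)  # → array([0., 0., 2.])
fused = fuse([p, backproject(320, 240, 2.2, K, T_wc)])
```

Because localization only ever touches the retrieved views, the 3D estimate stays sparse and is computed on demand, which is exactly what lets the pipeline skip dense reconstruction.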