
RenderMem: Rendering as Spatial Memory Retrieval

arXiv cs.AI · March 17, 2026


Key Points

  • RenderMem presents a spatial memory framework that uses rendering as the interface between 3D world representations and spatial reasoning.
  • Unlike memory systems that store fixed observations, RenderMem maintains a 3D scene representation and renders query-conditioned visuals from viewpoints implied by the query.
  • This design enables embodied agents to reason about line-of-sight, visibility, and occlusion from arbitrary perspectives and remains compatible with existing vision-language models without modifying standard architectures.
  • Experiments in the AI2-THOR environment show that RenderMem improves accuracy on viewpoint-dependent visibility and occlusion queries over prior memory baselines.

Abstract

Embodied reasoning is inherently viewpoint-dependent: what is visible, occluded, or reachable depends critically on where the agent stands. However, existing spatial memory systems for embodied agents typically store either multi-view observations or object-centric abstractions, making it difficult to perform reasoning with explicit geometric grounding. We introduce RenderMem, a spatial memory framework that treats rendering as the interface between 3D world representations and spatial reasoning. Instead of storing fixed observations, RenderMem maintains a 3D scene representation and generates query-conditioned visual evidence by rendering the scene from viewpoints implied by the query. This enables embodied agents to reason directly about line-of-sight, visibility, and occlusion from arbitrary perspectives. RenderMem is fully compatible with existing vision-language models and requires no modification to standard architectures. Experiments in the AI2-THOR environment show consistent improvements on viewpoint-dependent visibility and occlusion queries over prior memory baselines.
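The core idea above — keep a persistent 3D representation and answer viewpoint-dependent questions by rendering from the queried pose rather than replaying stored observations — can be illustrated with a toy sketch. This is not the paper's implementation; the class and method names (`ToyRenderMem`, `visible_from`) are hypothetical, and a z-buffer over labeled points stands in for whatever scene representation and renderer RenderMem actually uses.

```python
# Toy sketch of rendering-as-memory-retrieval (illustrative only, not RenderMem's code):
# store labeled 3D points as the "scene memory", then answer a visibility query by
# rendering a depth buffer from an arbitrary query viewpoint, so occlusion falls out
# of the rendering step instead of being stored explicitly.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    label: str

class ToyRenderMem:
    def __init__(self):
        self.points = []  # persistent 3D scene representation

    def observe(self, points):
        """Fold new observations into the scene memory."""
        self.points.extend(points)

    def visible_from(self, cam, res=32, fov=1.0):
        """Render a z-buffer from camera position `cam` (looking down +z)
        and return the labels that survive the occlusion test."""
        cx, cy, cz = cam
        zbuf = {}  # pixel (u, v) -> (depth, label)
        for p in self.points:
            dz = p.z - cz
            if dz <= 0:  # behind the camera plane
                continue
            # Pinhole projection onto a res x res image plane.
            u = int((p.x - cx) / (dz * fov) * res / 2 + res / 2)
            v = int((p.y - cy) / (dz * fov) * res / 2 + res / 2)
            if not (0 <= u < res and 0 <= v < res):
                continue
            if (u, v) not in zbuf or dz < zbuf[(u, v)][0]:
                zbuf[(u, v)] = (dz, p.label)  # nearer point occludes farther ones
        return {label for _, label in zbuf.values()}

mem = ToyRenderMem()
mem.observe([Point(0, 0, 1, "wall"), Point(0, 0, 2, "mug")])
print(mem.visible_from((0, 0, 0)))  # wall occludes the mug from here
print(mem.visible_from((0, 1, 0)))  # from an offset viewpoint the mug appears
```

The same query ("is the mug visible?") gets different answers depending on the rendered viewpoint, which is the property the paper targets; in the actual system the rendered view would be handed to an unmodified vision-language model rather than inspected directly.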