OCR-Memory: Optical Context Retrieval for Long-Horizon Agent Memory

arXiv cs.CL / 30 Apr 2026


Key Points

  • The paper introduces OCR-Memory, a new memory framework for autonomous LLM agents operating in long-horizon, interactive environments where effective reuse of past experience is critical.
  • Unlike conventional text-based memories, which trade off between token cost and information loss, OCR-Memory encodes long trajectories as images tagged with unique visual identifiers, enabling retention of long histories with minimal retrieval-time prompt overhead.
  • Retrieval uses a locate-and-transcribe approach that selects relevant visual regions via anchors and then fetches the corresponding verbatim text, avoiding free-form generation and reducing hallucination risk.
  • Experiments on long-horizon agent benchmarks report consistent improvements under strict context limits, indicating optical encoding increases effective memory capacity while preserving evidence fidelity.
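The encoding step summarized above can be sketched in a few lines. This is a minimal stand-in, not the paper's implementation: `encode_trajectory`, `MemoryPage`, and the page size are assumptions, and the actual rasterization of text into image pixels is elided; what the sketch shows is the bookkeeping that makes retrieval faithful, i.e. each visual identifier maps back to verbatim trajectory text.

```python
# Hypothetical sketch of OCR-Memory-style optical encoding.
# A long trajectory is split into fixed-size pages; in the real system each
# page would be rendered to an image stamped with a unique visual identifier.
# Rendering is elided here -- we keep the verbatim text keyed by the same
# identifier, which is exactly what the transcribe step later returns.

from dataclasses import dataclass

@dataclass
class MemoryPage:
    anchor_id: str   # unique visual identifier stamped on the rendered image
    text: str        # verbatim trajectory text the image encodes

def encode_trajectory(trajectory: str, page_chars: int = 200) -> dict[str, MemoryPage]:
    """Chunk a trajectory into pages and assign visual anchor IDs."""
    pages = {}
    for i in range(0, len(trajectory), page_chars):
        anchor_id = f"P{i // page_chars:04d}"
        pages[anchor_id] = MemoryPage(anchor_id, trajectory[i:i + page_chars])
    return pages

store = encode_trajectory("step 1: open browser. " * 30)
```

Because the store preserves the raw text behind each identifier, concatenating all pages reconstructs the original trajectory exactly, which is the fidelity property the paper emphasizes.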

Abstract

Autonomous LLM agents increasingly operate in long-horizon, interactive settings where success depends on reusing experience accumulated over extended histories. However, existing agent memory systems are fundamentally constrained by text-context budgets: storing or revisiting raw trajectories is prohibitively token-expensive, while summarization and text-only retrieval trade token savings for information loss and fragmented evidence. To address this limitation, we propose Optical Context Retrieval Memory (OCR-Memory), a memory framework that leverages the visual modality as a high-density representation of agent experience, enabling retention of arbitrarily long histories with minimal prompt overhead at retrieval time. Specifically, OCR-Memory renders historical trajectories into images annotated with unique visual identifiers. OCR-Memory retrieves stored experience via a locate-and-transcribe paradigm that selects relevant regions through visual anchors and retrieves the corresponding verbatim text, avoiding free-form generation and reducing hallucination. Experiments on long-horizon agent benchmarks show consistent gains under strict context limits, demonstrating that optical encoding increases effective memory capacity while preserving faithful evidence recovery.
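The locate-and-transcribe paradigm from the abstract can be illustrated with a small sketch. The function names and the keyword-overlap scorer below are assumptions standing in for the paper's visual retrieval model; the property the sketch demonstrates is that the retrieved output is verbatim stored text, never model-generated text, which is what limits hallucination.

```python
# Hypothetical sketch of locate-and-transcribe retrieval.
# Locate: score stored pages against a query and select anchor IDs.
# Transcribe: return the verbatim text behind those anchors, so the agent's
# prompt receives faithful evidence rather than a free-form paraphrase.

memory = {  # anchor_id -> verbatim page text (normally behind a rendered image)
    "P0000": "clicked login button; auth token saved to session",
    "P0001": "searched flights NYC to SFO; cheapest fare 198 USD",
    "P0002": "filled shipping form; submitted order confirmation",
}

def locate(query: str, pages: dict[str, str], k: int = 1) -> list[str]:
    """Rank anchors by keyword overlap (a stand-in for visual region matching)."""
    q = set(query.lower().split())
    scored = sorted(pages, key=lambda a: -len(q & set(pages[a].lower().split())))
    return scored[:k]

def transcribe(anchors: list[str], pages: dict[str, str]) -> list[str]:
    """Fetch verbatim text for each located anchor -- no free-form generation."""
    return [pages[a] for a in anchors]

hits = transcribe(locate("cheapest flight fare", memory), memory)
```

Separating the two stages is the design point: the locating model only has to point at a region identifier, so any error it makes yields an irrelevant but faithful passage rather than an invented one.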