Interactive Episodic Memory with User Feedback

arXiv cs.CV / April 29, 2026


Key Points

  • The paper introduces EM-QnF (Episodic Memory with Questions and Feedback) to make episodic memory over long egocentric videos work in interactive, real-world settings rather than a one-shot setup.
  • It lets users correct or add context to the model’s initial answer (e.g., specifying “the big blue mug” and “before this”), which helps resolve ambiguity and incompleteness in natural-language queries.
  • The authors collect datasets focused on feedback-based interactions and propose a lightweight training scheme that avoids costly sequential optimization.
  • They also present a plug-and-play Feedback Alignment Module (FALM) that can be added to existing EM-NLQ models to incorporate user feedback efficiently.
  • Experiments on three challenging benchmarks show significant improvements over prior state of the art, and human-feedback evaluations indicate good generalization to real-world scenarios.

Abstract

In episodic memory with natural language queries (EM-NLQ), a user may ask a question (e.g., "Where did I place the mug?") that requires searching a long egocentric video, captured from the user's perspective, to find the moment that answers it. However, queries can be ambiguous or incomplete, leading to incorrect responses. Current methods ignore this key aspect and address EM-NLQ in a one-shot setup, limiting their applicability in real-world scenarios. In this work, we address this gap and introduce the Episodic Memory with Questions and Feedback task (EM-QnF). Here, the user can provide feedback on the model's initial prediction or add more information (e.g., "Before this. I'm looking for the big blue mug not the white one"), helping the model refine its predictions interactively. To this end, we collect datasets for feedback-based interaction and propose a lightweight training scheme that avoids expensive sequential optimization. We also introduce a plug-and-play Feedback ALignment Module (FALM) that enables existing EM-NLQ models to incorporate user feedback effectively. Our approach significantly improves over the state of the art on three challenging benchmarks and is better than or competitive with commercial large vision-language models while remaining efficient. Evaluation with human-generated feedback shows that it generalizes well to real-world scenarios.
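The interaction loop described in the abstract — an initial prediction from the query, followed by a refinement step that conditions on user feedback — can be sketched as below. This is a minimal illustrative toy, not the paper's method: the class and method names (`ToyEMModel`, `predict`, `refine`, `TimeWindow`) and the keyword-based feedback handling are all assumptions for exposition; the actual FALM module operates on learned video and language features.

```python
from dataclasses import dataclass

# Hypothetical sketch of the EM-QnF interaction loop. All names here
# (ToyEMModel, TimeWindow, predict, refine) are illustrative assumptions,
# not the paper's actual API.

@dataclass
class TimeWindow:
    start_s: float  # predicted segment start (seconds)
    end_s: float    # predicted segment end (seconds)

class ToyEMModel:
    """Stand-in for an EM-NLQ model augmented with a feedback module."""

    def predict(self, query: str) -> TimeWindow:
        # One-shot prediction from the natural-language query alone.
        return TimeWindow(120.0, 130.0)

    def refine(self, query: str, prediction: TimeWindow,
               feedback: str) -> TimeWindow:
        # Fold user feedback into a new prediction. Here, "before this"
        # simply shifts the search window earlier; a real model would
        # condition jointly on the query, feedback, and video features.
        if "before" in feedback.lower():
            span = prediction.end_s - prediction.start_s
            return TimeWindow(prediction.start_s - span, prediction.start_s)
        return prediction

model = ToyEMModel()
first = model.predict("Where did I place the mug?")
second = model.refine("Where did I place the mug?", first,
                      "Before this. I'm looking for the big blue mug.")
print(second)  # refined window precedes the initial one
```

The key property the task asks for is exactly this two-step shape: the refined prediction is a function of the query, the model's own first answer, and the user's corrective feedback, rather than of the query alone.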