AI Navigate

Shopping Companion: A Memory-Augmented LLM Agent for Real-World E-Commerce Tasks

arXiv cs.CL / 3/17/2026


Key Points

  • The paper introduces a long-term memory benchmark for shopping tasks spanning 1.2 million real-world products to evaluate memory-aware LLM agents.
  • It proposes Shopping Companion, a unified framework that jointly handles memory retrieval and shopping assistance while supporting user intervention.
  • A dual-reward reinforcement learning strategy with tool-wise rewards is developed to address sparse and discontinuous rewards in multi-turn interactions, enabling effective training.
  • Experimental results show that even strong models like GPT-5 achieve under 70% success on the benchmark, highlighting significant challenges and the value of memory-augmented, end-to-end designs in e-commerce.

Abstract

In e-commerce, LLM agents show promise for shopping tasks such as recommendations, budgeting, and bundle deals, where accurately capturing user preferences from long-term conversations is critical. However, two challenges hinder realizing this potential: (1) the absence of benchmarks for evaluating long-term preference-aware shopping tasks, and (2) the lack of end-to-end optimization due to existing designs that treat preference identification and shopping assistance as separate components. In this paper, we introduce a novel benchmark with a long-term memory setup, spanning two shopping tasks over 1.2 million real-world products, and propose Shopping Companion, a unified framework that jointly tackles memory retrieval and shopping assistance while supporting user intervention. To train such capabilities, we develop a dual-reward reinforcement learning strategy with tool-wise rewards to handle the sparse and discontinuous rewards inherent in multi-turn interactions. Experimental results demonstrate that even state-of-the-art models (such as GPT-5) achieve success rates under 70% on our benchmark, highlighting the significant challenges in this domain. Notably, our lightweight LLM, trained with Shopping Companion, consistently outperforms strong baselines, achieving better preference capture and task performance, which validates the effectiveness of our unified design.
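The abstract does not spell out the dual-reward formulation, but the core idea of tool-wise rewards is to densify the sparse end-of-episode signal with per-tool-call credit. The sketch below is an illustrative assumption, not the paper's actual method: the tool names, weights, and scoring are hypothetical, and it simply shows how a dense per-call term can be blended with a sparse task-outcome term so that failed episodes still carry a learning signal.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ToolCall:
    """One tool invocation in a multi-turn episode (names are hypothetical)."""
    name: str        # e.g. "search_memory", "search_products", "add_to_cart"
    succeeded: bool  # did the call return a valid, useful result?


def dual_reward(tool_calls: List[ToolCall],
                task_success: bool,
                tool_weight: float = 0.3,
                task_weight: float = 1.0) -> float:
    """Blend a dense tool-wise term with a sparse task-outcome term.

    The tool term averages +1/-1 over calls, so the policy receives
    gradient signal on every turn even when the final reward is zero.
    Weights here are illustrative, not from the paper.
    """
    if tool_calls:
        tool_term = sum(1.0 if c.succeeded else -1.0
                        for c in tool_calls) / len(tool_calls)
    else:
        tool_term = 0.0
    task_term = 1.0 if task_success else 0.0
    return tool_weight * tool_term + task_weight * task_term


calls = [ToolCall("search_memory", True),
         ToolCall("search_products", True),
         ToolCall("add_to_cart", False)]

# A failed episode still yields a small positive reward from good tool use.
print(dual_reward(calls, task_success=False))  # 0.3 * (1/3) = 0.1
print(dual_reward(calls, task_success=True))   # 0.1 + 1.0 = 1.1
```

Under this kind of shaping, an agent that retrieves the right memories but fails the final purchase step is still rewarded for the retrieval, which is one plausible way to address the discontinuous rewards the paper describes.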