AI Navigate

RetailBench: Evaluating Long-Horizon Autonomous Decision-Making and Strategy Stability of LLM Agents in Realistic Retail Environments

arXiv cs.AI / 3/18/2026


Key Points

  • RetailBench introduces a high-fidelity benchmark to evaluate long-horizon autonomous decision-making by LLM agents in realistic retail environments with stochastic demand and evolving external conditions.
  • The paper proposes the Evolving Strategy & Execution framework, separating high-level strategic reasoning from low-level action execution to enable adaptive and interpretable strategy evolution over time.
  • Experiments on eight state-of-the-art LLMs show the framework improves operational stability and efficiency compared with baselines, though performance declines as task complexity increases.
  • The results reveal fundamental limitations of current LLMs for long-horizon, multi-factor decision-making, underscoring the need for further research in long-horizon planning under dynamic environments.

Abstract

Large Language Model (LLM)-based agents have achieved notable success on short-horizon and highly structured tasks. However, their ability to maintain coherent decision-making over long horizons in realistic and dynamic environments remains an open challenge. We introduce RetailBench, a high-fidelity benchmark designed to evaluate long-horizon autonomous decision-making in realistic commercial scenarios, where agents must operate under stochastic demand and evolving external conditions. We further propose the Evolving Strategy & Execution framework, which separates high-level strategic reasoning from low-level action execution. This design enables adaptive and interpretable strategy evolution over time, which is particularly important for long-horizon tasks, where non-stationary environments and error accumulation require strategies to be revised at a different temporal scale than action execution. Experiments on eight state-of-the-art LLMs across progressively challenging environments show that our framework improves operational stability and efficiency compared with baselines. However, performance degrades substantially as task complexity increases, revealing fundamental limitations in current LLMs for long-horizon, multi-factor decision-making.
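To make the two-timescale idea concrete, here is a minimal illustrative sketch (not the paper's actual implementation): a toy inventory loop where a "strategy" is revised only every few steps while low-level ordering actions execute every step. The function names (`propose_strategy`, `execute_action`, `run_episode`), the base-stock heuristic, and all parameter values are hypothetical stand-ins for the LLM components described in the abstract.

```python
import random

def propose_strategy(observation):
    # Stand-in for high-level LLM strategic reasoning: choose an
    # order-up-to stock level from recently observed demand.
    recent = observation["recent_demand"]
    return {"order_up_to": int(2 * sum(recent) / len(recent))}

def execute_action(strategy, inventory):
    # Low-level execution: order just enough to reach the target level.
    return max(0, strategy["order_up_to"] - inventory)

def run_episode(horizon=30, strategy_interval=7, seed=0):
    rng = random.Random(seed)
    inventory, fulfilled, strategy = 20, 0, None
    demand_history = []
    for t in range(horizon):
        demand = rng.randint(5, 15)  # stochastic demand, as in RetailBench
        demand_history.append(demand)
        # Strategy is revised on a slower timescale than action execution.
        if t % strategy_interval == 0:
            strategy = propose_strategy(
                {"recent_demand": demand_history[-strategy_interval:]}
            )
        inventory += execute_action(strategy, inventory)
        sold = min(inventory, demand)
        inventory -= sold
        fulfilled += sold
    return fulfilled / sum(demand_history)  # fill rate in [0, 1]
```

In this sketch, error accumulation in the fast loop (stockouts from a stale target) is corrected only when the slow loop revises the strategy, mirroring the separation of temporal scales the framework argues for.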