LongBench: Evaluating Robotic Manipulation Policies on Real-World Long-Horizon Tasks

arXiv cs.RO · April 21, 2026

📰 News · Models & Research

Key Points

  • The paper introduces LongBench, a real-world benchmark with 1,000+ robotic manipulation episodes to study why long-horizon policies degrade during extended execution.
  • LongBench covers two evaluation regimes—Context-Independent (fully observable) and Context-Dependent (ambiguity-driven)—to separate different sources of temporal difficulty.
  • The benchmark organizes tasks into capability- and ambiguity-specific subsets, enabling mechanism-aware analysis of robustness, temporal consistency, and context-dependent reasoning.
  • Experiments with six state-of-the-art policies show that long-horizon performance is influenced by multiple factors rather than a single dominant cause.
  • In fully observable settings, execution robustness correlates more strongly with performance, while context-related difficulty varies by task and is not consistently mitigated by memory-based methods.
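
The per-subset reporting described above implies scoring policies per (regime, subset) cell rather than as one aggregate success rate. A minimal sketch of that aggregation, using a hypothetical episode schema (the field names `regime`, `subset`, and `success` are assumptions for illustration, not taken from the paper):

```python
from collections import defaultdict

def subset_success_rates(episodes):
    """Aggregate per-episode outcomes into per-(regime, subset) success rates.

    `episodes` is a list of dicts with hypothetical keys:
      'regime'  -- 'context_independent' or 'context_dependent'
      'subset'  -- a capability- or ambiguity-specific label
      'success' -- bool outcome of the episode
    """
    totals = defaultdict(int)
    successes = defaultdict(int)
    for ep in episodes:
        key = (ep["regime"], ep["subset"])
        totals[key] += 1
        successes[key] += int(ep["success"])
    # Success rate per cell; cells with no episodes simply do not appear.
    return {key: successes[key] / totals[key] for key in totals}

# Made-up episode records for illustration only:
episodes = [
    {"regime": "context_independent", "subset": "precision_grasp", "success": True},
    {"regime": "context_independent", "subset": "precision_grasp", "success": False},
    {"regime": "context_dependent", "subset": "object_ambiguity", "success": True},
]
rates = subset_success_rates(episodes)
# rates[("context_independent", "precision_grasp")] -> 0.5
```

Reporting a table of such cells is what lets an analysis attribute failures to a specific mechanism (e.g. low scores concentrated in ambiguity-driven subsets) instead of a single averaged number.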

Abstract

Robotic manipulation policies often degrade over extended horizons, yet existing benchmarks provide limited insight into why such failures occur. Most prior benchmarks are either simulation-based or report only aggregate success, making it difficult to disentangle the distinct sources of temporal difficulty in real-world execution. We introduce LongBench, a real-world benchmark for evaluating long-horizon manipulation. LongBench consists of over 1,000 real-world episodes, covering two complementary regimes: Context-Independent (fully observable) and Context-Dependent (ambiguity-driven). By organizing tasks into capability- and ambiguity-specific subsets, LongBench enables mechanism-aware evaluation of execution robustness, temporal consistency, and context-dependent reasoning. Evaluating six state-of-the-art policies reveals that long-horizon performance is not governed by a single factor. We observe that performance in fully observable settings is more strongly associated with execution robustness, while contextual difficulty varies across tasks and is not consistently mitigated by memory-based methods. We hope that LongBench serves as a useful benchmark for studying long-horizon manipulation and for developing policies with stronger robustness across both execution and contextual challenges.