SEA-Eval: A Benchmark for Evaluating Self-Evolving Agents Beyond Episodic Assessment

arXiv cs.AI / 4/13/2026


Key Points

  • The paper argues that current LLM-based agents, while strong at episodic task completion, cannot accumulate experience or adapt strategies across tasks due to static toolsets and episodic amnesia.
  • It proposes a more formal definition of the Self-Evolving Agent (SEA) centered on digital embodiment and continuous cross-task evolution, sharpening the looser formulations of the paradigm in earlier work.
  • SEA-Eval is introduced as a new benchmark that evaluates SEA traits on sequential task streams along two dimensions: intra-task execution reliability and long-term evolutionary performance.
  • The benchmark uses metrics such as Success Rate and Token Consumption over time to reveal evolutionary gains that episodic benchmarks miss; a sketch of such an evaluation loop follows this list.
  • Experiments show a major evolutionary bottleneck in state-of-the-art frameworks, where identical success rates can hide up to 31.2× differences in token usage and produce divergent long-term evolutionary trajectories.
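
A minimal sketch of what such a sequential-stream evaluation loop could look like. `agent_step`, `EpisodeResult`, and the bookkeeping below are illustrative assumptions for this summary, not SEA-Eval's actual API; the point is only that the agent runs across a stream without resets, with Success Rate and Token Consumption tracked over time.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class EpisodeResult:
    success: bool        # did the agent complete this task?
    tokens_used: int     # LLM tokens consumed on this task

@dataclass
class StreamReport:
    # Cumulative Success Rate after each task in the stream.
    success_rates: List[float] = field(default_factory=list)
    # Per-task Token Consumption, so cost trends over time stay visible.
    token_counts: List[int] = field(default_factory=list)

def evaluate_stream(agent_step: Callable[[str], EpisodeResult],
                    task_stream: List[str]) -> StreamReport:
    """Run an agent over a sequential task stream. Unlike episodic
    benchmarks, the agent is NOT reset between tasks, so any memory or
    tooling it builds up can carry forward to later tasks."""
    report = StreamReport()
    successes = 0
    for i, task in enumerate(task_stream, start=1):
        result = agent_step(task)
        successes += int(result.success)
        report.success_rates.append(successes / i)
        report.token_counts.append(result.tokens_used)
    return report
```

Plotting `success_rates` and `token_counts` against the task index is what lets two agents with identical final success rates display very different cost trajectories, which is exactly the signal episodic averages hide.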

Abstract

Current LLM-based agents demonstrate strong performance in episodic task execution but remain constrained by static toolsets and episodic amnesia, failing to accumulate experience or optimize strategies across task boundaries. While the Self-Evolving Agent (SEA) paradigm has been previously proposed, this paper contributes a new formal definition of SEA grounded in digital embodiment and continuous cross-task evolution, and introduces SEA-Eval, the first benchmark designed to evaluate SEA characteristics across two dimensions: intra-task execution reliability and long-term evolutionary performance. By organizing tasks into sequential streams and analyzing Success Rate and Token Consumption over time, SEA-Eval quantifies evolutionary gain and structural stability in ways that existing episodic benchmarks cannot. Empirical evaluations reveal a significant evolutionary bottleneck in current state-of-the-art frameworks: identical success rates mask up to 31.2× differences in token consumption and divergent evolutionary trajectories under sequential analysis. SEA-Eval provides a rigorous scientific foundation for advancing agents from mere task executors toward genuinely self-evolving digital entities.
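
The abstract names "evolutionary gain" but does not spell out a formula; one plausible way to operationalize it is to compare early and late windows of a metric trajectory. The function and synthetic token series below are hypothetical illustrations, not the paper's definition.

```python
def evolutionary_gain(values: list, window: int = 10) -> float:
    """Mean of the last `window` observations minus the mean of the
    first `window`. For Success Rate, positive gain suggests learning;
    for Token Consumption, NEGATIVE gain (getting cheaper) is the
    desirable direction. Illustrative formula only."""
    if len(values) < 2 * window:
        raise ValueError("stream too short for the chosen window")
    early = sum(values[:window]) / window
    late = sum(values[-window:]) / window
    return late - early

# Two hypothetical agents with identical final success rates but
# opposite token trends, echoing the bottleneck the paper reports:
tokens_a = [1000 - 8 * i for i in range(100)]  # amortizes cost over time
tokens_b = [1000 + 8 * i for i in range(100)]  # grows more expensive
print(evolutionary_gain(tokens_a))  # -720.0 -> improving
print(evolutionary_gain(tokens_b))  # +720.0 -> degrading
```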