Improving Coherence and Persistence in Agentic AI for System Optimization

arXiv cs.AI / 3/24/2026


Key Points

  • The paper identifies two key failure modes in agentic LLM approaches to system optimization: evolutionary neighborhood bias (getting stuck on local optima) and a coherence ceiling (context degradation and weak long-horizon reasoning).
  • It proposes Engram, an agentic researcher architecture that separates long-horizon exploration from the limits of a single context window by using multiple sequential agents.
  • Engram improves persistence by writing code snapshots, logs, and results to a persistent Archive and generating a compact Research Digest that subsequent runs can read with fresh context.
  • The authors report that Engram outperforms baselines across diverse domains, including multi-cloud multicast, LLM inference request routing, and KV cache reuse optimization for databases queried in natural language.

Abstract

Designing high-performance system heuristics is a creative, iterative process requiring experts to form hypotheses and execute multi-step conceptual shifts. While Large Language Models (LLMs) show promise in automating this loop, they struggle with complex system problems due to two critical failure modes: evolutionary neighborhood bias and the coherence ceiling. Evolutionary methods often remain trapped in local optima by relying on scalar benchmark scores, failing when coordinated multi-step changes are required. Conversely, existing agentic frameworks suffer from context degradation over long horizons or fail to accumulate knowledge across independent runs. We present Engram, an agentic researcher architecture that addresses these limitations by decoupling long-horizon exploration from the constraints of a single context window. Engram organizes exploration into a sequence of agents that iteratively design, test, and analyze mechanisms. At the conclusion of each run, an agent stores code snapshots, logs, and results in a persistent Archive and distills high-level modeling insights into a compact, persistent Research Digest. Subsequent agents then begin with a fresh context window, reading the Research Digest to build on prior discoveries. We find that Engram exhibits superior performance across diverse domains including multi-cloud multicast, LLM inference request routing, and optimizing KV cache reuse in databases with natural language queries.
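The Archive/Digest cycle described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the names (`Archive`, `ResearchDigest`, `run_agent`) and the scoring logic are hypothetical, and stand in for real agent runs that design, test, and analyze system mechanisms.

```python
from dataclasses import dataclass, field

# Sketch of the Engram-style run loop: each agent run starts with a fresh
# context, reads only the compact Research Digest, explores, then persists
# raw artifacts to an Archive and distills new insights back into the Digest.

@dataclass
class Archive:
    """Persistent store for code snapshots, logs, and results (hypothetical)."""
    entries: list = field(default_factory=list)

    def store(self, run_id: int, snapshot: str, logs: str, result: float):
        self.entries.append(
            {"run": run_id, "snapshot": snapshot, "logs": logs, "result": result}
        )

@dataclass
class ResearchDigest:
    """Compact, persistent summary of high-level modeling insights."""
    insights: list = field(default_factory=list)

    def read(self) -> str:
        # This is all a subsequent fresh-context agent sees of prior runs.
        return "\n".join(self.insights)

    def distill(self, insight: str):
        self.insights.append(insight)

def run_agent(run_id: int, archive: Archive, digest: ResearchDigest) -> float:
    prior = digest.read()  # fresh context window + digest, no raw history
    # Toy "design/test/analyze" step: score improves as insights accumulate.
    score = 1.0 + 0.5 * len(digest.insights)
    archive.store(run_id, snapshot=f"heuristic_v{run_id}.py",
                  logs=f"prior digest:\n{prior}", result=score)
    digest.distill(f"run {run_id}: score {score:.1f}; keep mechanism v{run_id}")
    return score

archive, digest = Archive(), ResearchDigest()
scores = [run_agent(i, archive, digest) for i in range(3)]
print(scores)  # each run builds on distilled insights, not raw context
```

The key design point this sketch mirrors is the decoupling: the Archive grows without bound while the Digest stays small, so each new agent pays only the cost of reading the Digest rather than the full exploration history.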