Improving Coherence and Persistence in Agentic AI for System Optimization
arXiv cs.AI · 2026-03-24
Key Points
- The paper identifies two key failure modes in agentic LLM approaches to system optimization: evolutionary neighborhood bias (getting stuck on local optima) and a coherence ceiling (context degradation and weak long-horizon reasoning).
- It proposes Engram, an agentic researcher architecture that separates long-horizon exploration from the limits of a single context window by using multiple sequential agents.
- Engram improves persistence by writing code snapshots, logs, and results to a persistent Archive and generating a compact Research Digest that subsequent runs can read with fresh context.
- The authors report that Engram improves performance across several domains, including multi-cloud multicast, LLM inference request routing, and natural-language-driven optimization of database KV cache reuse.
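The Archive/Digest loop described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the names `Archive`, `make_digest`, `agent_run`, and `research_loop` are invented here, and the real system would invoke an LLM agent where `agent_run` returns a stub record. The point is the separation of concerns: each run starts with fresh context, persists its artifacts, and hands the next run only a compact digest.

```python
import json
from pathlib import Path


class Archive:
    """Persistent store for per-run records (code snapshots, logs, results).
    Hypothetical sketch, not the paper's actual storage layer."""

    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    def write_run(self, run_id: int, record: dict) -> None:
        # Persist one run's artifacts as a JSON file.
        (self.root / f"run_{run_id}.json").write_text(json.dumps(record))

    def read_runs(self) -> list[dict]:
        # Load all archived runs in order.
        return [json.loads(p.read_text())
                for p in sorted(self.root.glob("run_*.json"))]


def make_digest(runs: list[dict], top_k: int = 3) -> str:
    """Compress the full archive into a compact Research Digest:
    here, a summary of the top-k runs by score."""
    best = sorted(runs, key=lambda r: r["score"], reverse=True)[:top_k]
    return "\n".join(
        f"run {r['id']}: score={r['score']:.2f} idea={r['idea']}" for r in best
    )


def agent_run(run_id: int, digest: str) -> dict:
    """Stand-in for one agent run with a fresh context window.
    A real system would call an LLM here, conditioned only on the digest."""
    return {"id": run_id, "idea": f"variant-{run_id}",
            "score": run_id * 0.1, "context_seen": digest}


def research_loop(archive: Archive, n_runs: int) -> str:
    digest = ""
    for i in range(n_runs):
        record = agent_run(i, digest)              # fresh context each run
        archive.write_run(i, record)               # persist snapshot/log/result
        digest = make_digest(archive.read_runs())  # compact summary for next run
    return digest
```

Because each agent sees only the digest rather than the accumulated transcript, context length stays bounded regardless of how many sequential runs the exploration takes, which is the mechanism the paper uses to sidestep the coherence ceiling.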

