A Self-Evolving Framework for Efficient Terminal Agents via Observational Context Compression
arXiv cs.CL / 4/22/2026
News · Developer Stack & Infrastructure · Models & Research
Key Points
- The paper argues that long-horizon, terminal-centric agents often keep raw environment feedback in the dialogue history, creating heavy redundancy and causing token costs to grow roughly quadratically with the number of steps.
- It proposes TACO, a plug-and-play Terminal Agent Compression framework that self-evolves by automatically discovering and refining observation compression rules from interaction trajectories.
- Experiments on TerminalBench (TB 1.0 and TB 2.0) and four other terminal-related benchmarks show TACO improves performance across mainstream agent frameworks and strong backbone models.
- With MiniMax-2.5 as the backbone, TACO improves performance on most benchmarks while cutting token overhead by about 10%.
- On TerminalBench, it yields consistent 1%–4% gains across strong agentic models and improves accuracy by roughly 2%–3% under the same token budget, indicating good generalization of task-aware compression.
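The quadratic-cost claim in the first bullet follows from the agent loop itself: if each step's raw observation is appended to the history and the full history is re-sent to the model at every step, the cumulative tokens processed grow as the sum 1 + 2 + … + n. The sketch below is purely illustrative and is not the paper's implementation; the `compress` callable stands in for TACO's learned compression rules, and the fixed per-step observation size and 10x truncation ratio are assumptions for the example.

```python
def cumulative_tokens(num_steps, obs_tokens_per_step, compress=None):
    """Sum the context size submitted to the model across all steps.

    Each step appends one observation to the history; the whole history
    is re-sent at every step, so the total is quadratic in num_steps.
    """
    history, total = 0, 0
    for _ in range(num_steps):
        obs = obs_tokens_per_step
        if compress is not None:
            obs = compress(obs)          # hypothetical compression rule
        history += obs                   # observation kept in the history
        total += history                 # full history re-sent this step
    return total

# 100 steps, 500 observation tokens per step:
raw = cumulative_tokens(100, 500)                         # keep raw output
compact = cumulative_tokens(100, 500, lambda t: t // 10)  # keep ~10% of it
print(raw, compact)  # 2525000 252500
```

Both curves are still quadratic in the number of steps, but compressing observations scales the constant factor down by the compression ratio, which is the lever a framework like TACO pulls on.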