Decocted Experience Improves Test-Time Inference in LLM Agents
arXiv cs.AI / 4/7/2026
Key Points
- The paper addresses how to improve LLM agent performance without updating model parameters, focusing on test-time inference enhancements that reduce wasted computation and suboptimal exploration.
- It proposes using input context as a complementary scaling axis alongside test-time compute, arguing that the quality of context construction is crucial for guiding agent reasoning.
- The authors introduce and analyze “decocted experience,” a mechanism that extracts the essence of past experience, organizes it coherently, and retrieves salient parts to build better prompts for reasoning and agentic behavior.
- The work systematically studies experience-augmented agents, including how performance scales with accumulated experience, what characterizes effective context, and which data structures support context construction.
- Experiments validate the approach across math reasoning, web browsing, and software engineering tasks, showing that decocted experience improves test-time inference outcomes for LLM agents.
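The extract–organize–retrieve pipeline described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the `ExperienceStore` class, its method names, and the keyword-overlap retrieval are all assumptions standing in for whatever distillation and retrieval the authors actually use.

```python
from collections import defaultdict

class ExperienceStore:
    """Hypothetical sketch of an experience-augmented prompt builder:
    distill past trajectories into short lessons, organize them by
    task type, and retrieve the most relevant ones at inference time."""

    def __init__(self):
        # Organize lessons by task type (one possible "coherent" structure).
        self.lessons = defaultdict(list)

    def distill(self, task_type, trajectory, outcome):
        # Stand-in for "decoction": keep only a one-line takeaway
        # per trajectory, tagged with its outcome.
        lesson = f"[{outcome}] {trajectory.splitlines()[0]}"
        self.lessons[task_type].append(lesson)

    def retrieve(self, task_type, query, k=2):
        # Naive relevance: rank lessons by keyword overlap with the query.
        q = set(query.lower().split())
        return sorted(
            self.lessons[task_type],
            key=lambda lesson: len(q & set(lesson.lower().split())),
            reverse=True,
        )[:k]

    def build_prompt(self, task_type, query):
        # Salient past experience is prepended as context for the agent.
        header = "Relevant past experience:\n" + "\n".join(
            self.retrieve(task_type, query)
        )
        return f"{header}\n\nTask: {query}"

store = ExperienceStore()
store.distill("math", "Factor the quadratic before applying the formula", "success")
store.distill("math", "Check edge cases when dividing by a variable", "failure")
prompt = store.build_prompt("math", "Solve the quadratic equation x^2 - 5x + 6 = 0")
print(prompt)
```

The point of the sketch is the scaling axis the paper argues for: the model's parameters never change; only the constructed context grows richer as experience accumulates.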