Internalizing Agency from Reflective Experience
arXiv cs.AI / 3/18/2026
Key Points
- LEAFE is a framework that internalizes recovery agency from reflective experience to improve long-horizon agent performance in LLMs.
- It addresses the limitations of outcome-driven post-training methods by leveraging rich environment feedback, which helps prevent distribution sharpening.
- During exploration, the agent summarizes environment feedback, backtracks to earlier decision points, and explores alternative branches; supervised fine-tuning then distills these corrections into the model.
- Empirical results across interactive coding and agentic tasks show LEAFE improves Pass@1 and outperforms GRPO and Early Experience baselines, with up to 14% gains on Pass@128.
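The reflect-backtrack-explore loop described in the key points can be sketched as a toy example. Everything here is an assumption for illustration: the function names (`run_step`, `explore_with_backtracking`), the toy environment, and the retry logic are hypothetical stand-ins, not the paper's actual implementation.

```python
import random

def run_step(state, action):
    """Toy environment (assumption): the 'correct' action at each step
    is state % 2. Returns (success, feedback string)."""
    ok = (action == state % 2)
    feedback = "ok" if ok else f"expected {state % 2}, got {action}"
    return ok, feedback

def explore_with_backtracking(horizon=5, max_retries=3):
    """Hypothetical sketch of the loop: on failure, summarize the
    environment feedback, backtrack to the failing decision, and try an
    alternative branch; corrected decisions are collected as supervised
    fine-tuning (SFT) data."""
    trajectory = []  # (state, action) pairs that ultimately succeeded
    sft_data = []    # (feedback summary, corrected action) pairs for SFT
    state = 0
    while state < horizon:
        action = random.choice([0, 1])  # initial, possibly wrong, policy
        ok, feedback = run_step(state, action)
        retries = 0
        while not ok and retries < max_retries:
            # "Reflection": summarize feedback from the failed branch,
            # then backtrack and take the alternative the feedback suggests.
            summary = f"step {state}: {feedback}"
            action = state % 2
            ok, feedback = run_step(state, action)
            sft_data.append((summary, action))
            retries += 1
        trajectory.append((state, action))
        state += 1
    return trajectory, sft_data
```

In this sketch the SFT pairs carry both the failure summary and the corrected action, mirroring the idea that corrections, not just final outcomes, are distilled into the model.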