ContextWeaver: Selective and Dependency-Structured Memory Construction for LLM Agents
arXiv cs.CL · April 28, 2026
📰 News · Models & Research
Key Points
- LLM agents can lose important earlier information in long conversations because common context-management methods (such as sliding windows or prompt compression) can drop structured details that are needed later.
- ContextWeaver proposes a dependency-structured memory that turns an agent’s interaction trace into a graph of reasoning steps, linking each step to the earlier steps it depends on.
- The framework builds compact, reusable summaries along root-to-step reasoning paths, letting future actions reuse earlier logical structure efficiently (a minimal sketch follows this list).
- A lightweight validation layer uses execution feedback to improve the reliability of the selected context.
- On SWE-bench Verified and SWE-bench Lite, ContextWeaver outperforms a sliding-window baseline on pass@1 while using fewer reasoning steps and fewer tokens.
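
To make the mechanism concrete, here is a minimal Python sketch of what a dependency-structured memory along these lines might look like. All names (`StepNode`, `DependencyMemory`, `build_context`, `apply_feedback`) and the summarizer interface are illustrative assumptions, not ContextWeaver's actual API; the paper's own implementation details are not given in this digest.

```python
# Hypothetical sketch of a dependency-structured memory (not the paper's API):
# each reasoning step is a graph node linked to the earlier steps it depends
# on; context for a step is assembled from cached summaries along its
# root-to-step dependency path, and execution feedback flags step validity.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StepNode:
    step_id: int
    content: str                                       # the reasoning step's text
    parents: list[int] = field(default_factory=list)   # ids of steps it depends on
    summary: str | None = None                         # cached compact summary
    validated: bool = False                            # set from execution feedback

class DependencyMemory:
    def __init__(self) -> None:
        self.nodes: dict[int, StepNode] = {}

    def add_step(self, step_id: int, content: str, depends_on: list[int]) -> None:
        """Record a reasoning step and link it to its dependencies."""
        self.nodes[step_id] = StepNode(step_id, content, parents=depends_on)

    def path_to_root(self, step_id: int) -> list[StepNode]:
        """Walk dependency edges to collect a step's ancestors in
        root-to-step order, deduplicating shared ancestors."""
        seen: set[int] = set()
        order: list[StepNode] = []

        def visit(sid: int) -> None:
            if sid in seen or sid not in self.nodes:
                return
            seen.add(sid)
            for parent in self.nodes[sid].parents:
                visit(parent)            # parents first, so roots come earliest
            order.append(self.nodes[sid])

        visit(step_id)
        return order

    def build_context(self, step_id: int, summarize: Callable[[str], str]) -> str:
        """Assemble compact context for a step: reuse cached summaries along
        the root-to-step path, summarizing any node not yet summarized."""
        parts = []
        for node in self.path_to_root(step_id):
            if node.summary is None:
                node.summary = summarize(node.content)   # e.g. an LLM call
            parts.append(node.summary)
        return "\n".join(parts)

    def apply_feedback(self, step_id: int, success: bool) -> None:
        """Lightweight validation: mark a step (in)valid from execution
        feedback so unreliable context can be filtered later."""
        if step_id in self.nodes:
            self.nodes[step_id].validated = success

if __name__ == "__main__":
    mem = DependencyMemory()
    mem.add_step(0, "Read the failing test in tests/test_io.py", depends_on=[])
    mem.add_step(1, "Locate the function the test exercises", depends_on=[0])
    mem.add_step(2, "Draft a patch for the off-by-one error", depends_on=[1])
    # Stand-in summarizer (truncation); a real system would call an LLM here.
    print(mem.build_context(2, summarize=lambda s: s[:40]))
    mem.apply_feedback(2, success=True)
```

The sketch illustrates why this can beat a sliding window: context for a new step is the summarized ancestor chain it actually depends on, not the most recent N tokens, so early-but-relevant steps survive while unrelated intermediate turns are skipped.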
Related Articles
- An improvement of the convergence proof of the ADAM-Optimizer (Dev.to)
- We built an AI that runs an entire business autonomously. Not a demo. Not a prototype. Actually running. YC-backed, here's what we learned. (Reddit r/artificial)
- langchain-tests==1.1.7 (LangChain Releases)
- Why isn’t LLM reasoning done in vector space instead of natural language? (Reddit r/LocalLLaMA)
- llama.cpp's Preliminary SM120 Native NVFP4 MMQ Is Merged (Reddit r/LocalLLaMA)