Evolve: A Persistent Knowledge Lifecycle for Small Language Models
arXiv cs.LG / 4/28/2026
Key Points
- Evolve proposes a persistent knowledge lifecycle for small local language models by pairing a 2B-parameter model with a teacher-compiled, semantically coherent knowledge store that is updated and consolidated over time.
- Instead of retrieving fragments at query time, it stages new knowledge sections as they are acquired, consolidates them offline via teacher-mediated merging (“sleep consolidation”), and refreshes sections inline when they expire (see the lifecycle sketch after these points).
- Experiments on 750 benchmark queries (specialist questions, NaturalQuestions, TriviaQA) show accuracy rising from a 20–33% baseline to 60–84% (+40–52 percentage points) while cutting teacher-model invocations by more than 50% through cross-query knowledge reuse.
- Consolidation compresses the knowledge store by 31–33.5% across the three benchmarks without sacrificing accuracy, and section-based retrieval outperforms chunk-based retrieval by 5–9 percentage points across all lifecycle conditions (see the retrieval sketch below).
- The system supports two generation modes over the same underlying knowledge lifecycle: “suppress” (strict section-only generation, auditable) and “augment” (generation supplemented by retrieved sections), sketched last below.
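
To make the stage → consolidate → refresh lifecycle concrete, here is a minimal Python sketch. All names here (`Section`, `SectionStore`, `teacher.merge`, `teacher.compile`, the one-day TTL) are illustrative assumptions for exposition, not APIs or settings from the paper.

```python
import time
from dataclasses import dataclass


@dataclass
class Section:
    """One semantically coherent knowledge section (assumed structure)."""
    topic: str
    text: str
    expires_at: float      # wall-clock expiry used for inline refresh
    staged: bool = True    # newly acquired sections start out staged


class SectionStore:
    def __init__(self, teacher, ttl_seconds: float = 86_400.0):
        self.teacher = teacher                  # larger teacher model (assumed interface)
        self.ttl = ttl_seconds
        self.sections: dict[str, Section] = {}

    def stage(self, topic: str, text: str) -> None:
        """Stage newly acquired knowledge; nothing is merged at query time."""
        self.sections[topic] = Section(topic, text, time.time() + self.ttl)

    def consolidate(self) -> None:
        """Offline 'sleep consolidation': the teacher merges staged text
        into one coherent section per topic, shrinking the store."""
        for sec in self.sections.values():
            if sec.staged:
                sec.text = self.teacher.merge(sec.topic, sec.text)
                sec.staged = False

    def refresh(self, topic: str) -> Section:
        """Inline refresh: recompile a section via the teacher once it expires."""
        sec = self.sections[topic]
        if time.time() >= sec.expires_at:
            sec.text = self.teacher.compile(sec.topic)
            sec.expires_at = time.time() + self.ttl
        return sec
```

Because consolidation and refresh reuse sections across queries, the teacher is invoked per topic rather than per query, which is where the reported >50% reduction in teacher calls would come from.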
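Section-based retrieval, credited above with the 5–9 point edge over chunking, could look like the following: each whole section is scored against the query instead of fixed-size chunks. The `embed` helper (text → vector) is an assumption; the digest does not say which embedding model is used.

```python
import numpy as np


def retrieve_section(store: SectionStore, embed, query: str) -> Section:
    """Score whole sections by cosine similarity and return the best match.
    'embed' is an assumed text-embedding function, not from the paper."""
    q = embed(query)
    best, best_sim = None, -1.0
    for sec in store.sections.values():
        v = embed(sec.text)
        sim = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        if sim > best_sim:
            best, best_sim = sec, sim
    return best
```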
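Finally, the two generation modes reduce to a prompting choice over the same retrieved section. `model.generate` is again an assumed interface, and the exact prompts are hypothetical; the point is that "suppress" constrains answers to the section (making them auditable) while "augment" also allows the small model's own parametric knowledge.

```python
def answer(model, store: SectionStore, embed, query: str,
           mode: str = "suppress") -> str:
    """Run either generation mode over the same lifecycle/store (sketch)."""
    section = retrieve_section(store, embed, query)
    if mode == "suppress":
        prompt = ("Answer ONLY from the section below; reply 'unknown' if "
                  f"it is not covered.\n\n{section.text}\n\nQ: {query}")
    else:  # "augment"
        prompt = ("Use the section below together with your own knowledge."
                  f"\n\n{section.text}\n\nQ: {query}")
    return model.generate(prompt)
```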