ADEMA: A Knowledge-State Orchestration Architecture for Long-Horizon Knowledge Synthesis with LLM Agents
arXiv cs.AI / 4/29/2026
Key Points
- The paper introduces ADEMA, a knowledge-state orchestration architecture designed to make long-horizon LLM knowledge synthesis more reliable by preventing knowledge-state drift and preserving evidence continuity across rounds.
- ADEMA emphasizes explicit epistemic bookkeeping, dual heterogeneous evaluator governance, adaptive task-mode switching, reputation-shaped resource allocation, and checkpoint-resumable persistence to improve trajectory discipline and cost-quality behavior.
- It also uses segment-level memory condensation, artifact-first assembly, and final-validity checking with a safe fallback rather than relying solely on generic multi-agent runtime assumptions (a sketch of segment-level condensation follows this list).
- Experiments on a 60-run fixed mechanism matrix show that removing checkpoint/resume produced the only invalid run, which occurred under interruption-sensitive resume conditions, highlighting recoverable continuity as a key driver (a sketch of the checkpoint/resume mechanism also follows this list).
- The authors conclude that dual evaluation, segment synthesis, and dynamic governance function mainly as supporting control mechanisms that shape progression and outcomes, not as universal prerequisites for success.
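The summary does not include any implementation details, but the idea behind segment-level memory condensation can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the function names (`condense_segment`, `condense_memory`), the `summarize` callable (e.g., an LLM summarization call), and the "evidence_id: text" event format are not taken from the paper.

```python
from typing import Callable

def condense_segment(segment_events: list[str], summarize: Callable[[str], str]) -> dict:
    # Collapse one finished segment of the trajectory into a compact record:
    # a short summary plus the evidence identifiers it drew on, so later rounds
    # keep evidence continuity without carrying full transcripts forward.
    # Assumes a hypothetical "evidence_id: text" format for each event.
    return {
        "summary": summarize("\n".join(segment_events)),
        "evidence_ids": sorted({e.split(":", 1)[0] for e in segment_events if ":" in e}),
    }

def condense_memory(working_memory: list[str], segment_size: int,
                    summarize: Callable[[str], str]) -> list[dict]:
    # Replace raw round-by-round memory with per-segment condensed records,
    # so the working context stays short as the horizon grows.
    segments = [working_memory[i:i + segment_size]
                for i in range(0, len(working_memory), segment_size)]
    return [condense_segment(seg, summarize) for seg in segments]
```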
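Likewise, checkpoint-resumable persistence combined with final-validity checking and a safe fallback, the mechanism the ablation singles out, could look roughly like the following. This is a hypothetical sketch, not ADEMA's implementation: the checkpoint directory, the `synthesize_round` and `is_valid` callables, and the JSON state layout are all assumptions.

```python
import json
from pathlib import Path

CKPT_DIR = Path("checkpoints")  # hypothetical on-disk location for per-round snapshots

def save_checkpoint(round_idx: int, state: dict) -> None:
    # Persist the knowledge state after every synthesis round.
    CKPT_DIR.mkdir(exist_ok=True)
    (CKPT_DIR / f"round_{round_idx:04d}.json").write_text(json.dumps(state))

def load_latest_checkpoint() -> tuple[int, dict]:
    # Resume from the newest snapshot, or start fresh if none exist.
    snapshots = sorted(CKPT_DIR.glob("round_*.json")) if CKPT_DIR.exists() else []
    if not snapshots:
        return 0, {"evidence": [], "claims": []}
    latest = snapshots[-1]
    return int(latest.stem.split("_")[1]) + 1, json.loads(latest.read_text())

def run_synthesis(total_rounds: int, synthesize_round, is_valid) -> dict:
    # Checkpoint-resumable loop: an interruption loses at most one round,
    # and the final artifact is validity-checked before being returned.
    start, state = load_latest_checkpoint()
    for r in range(start, total_rounds):
        state = synthesize_round(r, state)
        save_checkpoint(r, state)
    if is_valid(state):
        return state
    # Safe fallback: return the most recent checkpoint that still validates.
    for snap in sorted(CKPT_DIR.glob("round_*.json"), reverse=True):
        candidate = json.loads(snap.read_text())
        if is_valid(candidate):
            return candidate
    return state
```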


