The World Leaks the Future: Harness Evolution for Future Prediction Agents
arXiv cs.AI / 4/20/2026
Key Points
- The paper studies “future prediction” by LLM agents, where predictions must be made using only public information available before the final outcome is known.
- It argues that existing methods lean too heavily on final outcomes: because supervision arrives only after a question resolves, agents get little guidance on the earlier work of tracking relevant factors, gathering and weighing evidence, and managing uncertainty.
- It introduces “internal feedback,” a signal derived from revisiting the same unresolved question over time and comparing temporal prediction differences to reveal omissions in earlier reasoning.
- The authors propose Milkyway, a self-evolving agent system that keeps the base model fixed but updates a persistent “future prediction harness” using internal feedback during repeated predictions.
- Experiments on FutureX and FutureWorld show Milkyway achieves the top overall scores, substantially improving results (FutureX: 44.07→60.90; FutureWorld: 62.22→77.96).
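The internal-feedback idea above can be sketched in code. This is an illustrative toy, not the paper's implementation: all names (`Prediction`, `internal_feedback`, the `harness` list) are hypothetical. The point is that comparing two passes over the same unresolved question yields a supervision signal (omitted evidence, probability shifts) before the outcome is ever known, and those lessons can be accumulated in a persistent harness while the base model stays frozen.

```python
# Illustrative sketch of "internal feedback" (hypothetical names, not the
# paper's code): revisit the same unresolved question at two times and diff
# the agent's cited evidence and stated probability.
from dataclasses import dataclass, field

@dataclass
class Prediction:
    timestamp: str
    probability: float                               # agent's P(outcome) at this time
    evidence: set[str] = field(default_factory=set)  # evidence items the agent cited

def internal_feedback(earlier: Prediction, later: Prediction) -> dict:
    """Compare two predictions for the same unresolved question.

    Evidence cited later but absent earlier suggests an omission in the
    earlier reasoning; a large probability shift flags unstable reasoning.
    No resolved outcome is required, so the signal is available pre-resolution.
    """
    return {
        "omitted_evidence": later.evidence - earlier.evidence,
        "probability_shift": later.probability - earlier.probability,
    }

# A persistent "harness" of accumulated lessons, updated across repeated
# predictions while the base model itself is never fine-tuned (a toy
# stand-in for the self-evolving harness described above).
harness: list[str] = []

p1 = Prediction("2026-01-10", 0.35, {"poll A"})
p2 = Prediction("2026-02-10", 0.60, {"poll A", "poll B", "expert forecast"})

fb = internal_feedback(p1, p2)
for item in sorted(fb["omitted_evidence"]):
    harness.append(f"Consider sources like '{item}' on the first pass.")
```

In this toy run, the diff surfaces `poll B` and `expert forecast` as evidence the earlier pass missed, and the harness grows by two lessons; how Milkyway actually represents and applies such feedback is specified in the paper, not here.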