EVIL: Evolving Interpretable Algorithms for Zero-Shot Inference on Event Sequences and Time Series with LLMs
arXiv cs.LG / 4/20/2026
Key Points
- The paper introduces EVIL, an LLM-guided evolutionary search method that evolves simple, fully interpretable Python/NumPy programs for dynamical systems inference without training neural networks on large datasets.
- EVIL performs zero-shot, in-context inference: a single compact inference function is evolved once and then applied unchanged across multiple evaluation datasets.
- The approach is evaluated on three time- and event-sequence tasks—next-event prediction for temporal point processes, rate matrix estimation for Markov jump processes, and time-series imputation.
- Results indicate the evolved algorithms are often competitive with or outperform state-of-the-art deep learning models while being orders of magnitude faster and maintaining full interpretability.
- The work claims a first-of-its-kind demonstration of LLM-guided program evolution producing one unified inference function across these dynamical-systems problems.
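To make the idea concrete, here is a minimal sketch of the select-the-best-program loop at the heart of LLM-guided program evolution. All names (`impute_mean`, `impute_ffill`, `score`, `evolve`) and the toy imputation task are illustrative assumptions, not the paper's actual code: in EVIL an LLM would propose rewrites of candidate programs, whereas here a fixed pool of hand-written NumPy candidates stands in for the LLM's proposals.

```python
import numpy as np

def impute_mean(x):
    """Candidate program 1: fill NaNs with the series mean."""
    return np.where(np.isnan(x), np.nanmean(x), x)

def impute_ffill(x):
    """Candidate program 2: forward-fill NaNs with the last observed value."""
    out = x.copy()
    for i in range(1, len(out)):
        if np.isnan(out[i]):
            out[i] = out[i - 1]
    return out

def score(candidate, x_true, mask):
    """Fitness: mean absolute imputation error on artificially hidden points."""
    x_obs = x_true.copy()
    x_obs[mask] = np.nan
    pred = candidate(x_obs)
    return float(np.mean(np.abs(pred[mask] - x_true[mask])))

def evolve(candidates, x_true, mask):
    """One selection step of the evolutionary loop. In the full method an LLM
    would mutate/recombine the survivors into new programs; here we only
    select the fittest program from a fixed pool."""
    return min(candidates, key=lambda c: score(c, x_true, mask))

# Toy task: impute a synthetic random-walk series with 20% of points hidden.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))
mask = rng.random(200) < 0.2
mask[0] = False  # keep the first point observed so forward-fill is defined
best = evolve([impute_mean, impute_ffill], x, mask)
```

Because the evolved artifact is an ordinary Python function, the winning program stays fully inspectable, which is the interpretability property the key points emphasize; on a drifting random walk, forward-fill wins this selection step since the global mean is a poor local estimate.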