Interpretable and Explainable Surrogate Modeling for Simulations: A State-of-the-Art Survey and Perspectives on Explainable AI for Decision-Making
arXiv cs.AI / 4/17/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that surrogate models, while crucial for reducing the cost of simulating complex systems, often inherit and intensify the “black-box” opacity of the underlying simulators.
- It positions Explainable AI (XAI) as a way to understand how inputs drive physical responses, but notes that common XAI methods face engineering-specific challenges like highly correlated inputs, dynamical systems, and strict reliability requirements.
- The authors present a state-of-the-art survey that connects XAI techniques to different stages of surrogate-modeling workflows for design and exploration, aiming to bridge two historically separate research communities.
- Using examples from both equation-based and agent-based simulations, the survey maps each technique to its strengths, such as revealing variable interactions or supporting human interpretability.
- It highlights open research problems—especially explainability for dynamical and mixed-variable systems—and outlines a research agenda to embed explainability throughout simulation-driven decision-making workflows.
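To make the idea concrete, here is a minimal, hypothetical sketch (not from the paper) of the workflow the survey covers: a cheap surrogate is fit to samples from an "expensive" simulator, and a model-agnostic XAI method, permutation importance, is used to see which inputs drive the response. The toy `simulator` function and all parameter choices are illustrative assumptions.

```python
# Hypothetical sketch: surrogate modeling + model-agnostic XAI.
# The "simulator" below stands in for an expensive physics simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

def simulator(X):
    # Toy response: depends strongly on x0, weakly on x1, not at all on x2.
    return np.sin(3 * X[:, 0]) + 0.3 * X[:, 1]

# Design-of-experiments samples to train the surrogate on.
X_train = rng.uniform(-1, 1, size=(500, 3))
y_train = simulator(X_train)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# Permutation importance: shuffle one input at a time on held-out data
# and measure the drop in the surrogate's predictive accuracy.
X_test = rng.uniform(-1, 1, size=(200, 3))
result = permutation_importance(
    surrogate, X_test, simulator(X_test), n_repeats=10, random_state=0
)
print(result.importances_mean)  # x0 should dominate, x2 should be near zero
```

This is exactly the kind of analysis the survey discusses: the importance scores explain which simulator inputs matter, though with correlated inputs or dynamical systems such scores can mislead, which is why the paper treats those settings as open challenges.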