Interpretable and Explainable Surrogate Modeling for Simulations: A State-of-the-Art Survey and Perspectives on Explainable AI for Decision-Making

arXiv cs.AI / 4/17/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that surrogate models, while crucial for reducing the cost of simulating complex systems, often inherit and intensify the “black-box” opacity of the underlying simulators.
  • It positions Explainable AI (XAI) as a way to understand how inputs drive physical responses, but notes that common XAI methods face engineering-specific challenges like highly correlated inputs, dynamical systems, and strict reliability requirements.
  • The authors present a state-of-the-art survey that connects XAI techniques to different stages of surrogate-modeling workflows for design and exploration, aiming to bridge two historically separate research communities.
  • Using examples from both equation-based and agent-based simulations, the survey maps techniques to strengths such as revealing variable interactions and supporting human interpretability.
  • It highlights open research problems—especially explainability for dynamical and mixed-variable systems—and outlines a research agenda to embed explainability throughout simulation-driven decision-making workflows.

Abstract

The simulation of complex systems increasingly relies on sophisticated but fundamentally opaque computational black-box simulators. Surrogate models play a central role in reducing the computational cost of such simulations across a wide range of scientific and engineering domains. Nevertheless, they inevitably inherit, and often exacerbate, this black-box nature, obscuring how input variables drive physical responses. Conversely, Explainable Artificial Intelligence (XAI) offers powerful tools to unpack these models. Yet XAI methods struggle with engineering-specific constraints, such as highly correlated inputs, dynamical systems, and rigorous reliability requirements. Consequently, surrogate modeling and XAI have largely evolved as distinct fields of research, despite their strong complementarity. To reconnect these approaches, this state-of-the-art survey provides a structured perspective that maps existing XAI techniques onto the various stages of surrogate-modeling workflows for design and exploration. To ground this synthesis, we draw upon illustrative applications spanning both equation-based simulations and agent-based modeling. We survey a broad spectrum of techniques, highlighting their strengths for revealing variable interactions and supporting human comprehension. Finally, we identify pressing open challenges, including the explainability of dynamical systems and the handling of mixed-variable systems, and propose a research agenda to make explainability a core, embedded element of simulation-driven workflows from model construction through decision-making. By transforming opaque emulators into explainable tools, this agenda empowers practitioners to move beyond accelerating simulations to extracting actionable insights from complex system behaviors.
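To make the surrogate-plus-XAI pattern concrete, here is a minimal sketch (illustrative only, not taken from the paper): a toy "simulator" is emulated by a cheap degree-2 polynomial surrogate, and permutation importance, one of the model-agnostic XAI techniques such surveys cover, is applied to the surrogate to reveal which inputs actually drive the response. The simulator function, design size, and feature set are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(X):
    # Toy "expensive" simulator: x0 dominates, x1 acts only through
    # an interaction with x0, and x2 is inert.
    return np.sin(3 * X[:, 0]) + X[:, 0] * X[:, 1]

# Sample a small design of experiments and evaluate the simulator.
X = rng.uniform(-1, 1, size=(200, 3))
y = simulator(X)

def features(X):
    # Degree-2 polynomial basis with all pairwise interaction terms.
    d = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

# Fit the surrogate by least squares; this replaces the costly simulator.
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda X: features(X) @ coef

# Permutation importance on the surrogate: shuffle one input column at a
# time and measure how much the mean squared error increases.
base_mse = np.mean((surrogate(X) - y) ** 2)
importance = []
for i in range(3):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])
    importance.append(np.mean((surrogate(Xp) - y) ** 2) - base_mse)

print([round(v, 4) for v in importance])
```

Shuffling the inert input x2 barely changes the surrogate's error, while shuffling x0 or x1 degrades it, exposing the interaction structure the abstract refers to. Note the caveat the survey raises: with highly correlated inputs, naive permutation breaks the joint input distribution and can mislead, which is exactly where engineering-aware XAI variants are needed.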