Explainable Planning for Hybrid Systems

arXiv cs.AI / 4/14/2026


Key Points

  • The paper introduces research on Explainable AI Planning (XAIP) specifically tailored to hybrid systems that model real-world problems more closely than purely abstract settings.
  • It motivates the work by highlighting how automated planning is increasingly used in complex and safety-critical domains such as energy grids, self-driving cars, traffic control, robotics, and healthcare.
  • The study frames explainability as a key unsolved challenge for the planning community, especially as planners are deployed in higher-stakes environments.
  • The study is announced on arXiv as a new contribution (arXiv:2604.09578v1).

Abstract

Recent advances in artificial intelligence (AI) technologies are driving a paradigm shift toward automation, with autonomous systems fully or partially replacing manually crafted ones. At the core of these systems is automated planning. With the advent of powerful planners, automated planning is now applied to many complex and safety-critical domains, including smart energy grids, self-driving cars, warehouse automation, urban and air traffic control, search and rescue operations, surveillance, robotics, and healthcare. As planners are deployed in such high-stakes settings, there is a growing need to generate explanations of AI-based systems, which remains one of the major challenges facing the planning community today. The thesis presents a comprehensive study of explainable artificial intelligence planning (XAIP) for hybrid systems, which model real-world problems more closely than purely abstract settings.