MARLIN: Multi-Agent Reinforcement Learning for Incremental DAG Discovery

arXiv cs.LG / 2026/3/24


Key Points

  • The paper introduces MARLIN, a multi-agent reinforcement learning method aimed at efficiently learning causal structures (directed acyclic graphs, DAGs) from observational data.
  • MARLIN improves online suitability through a policy that maps a continuous real-valued space to the DAG space, enabling incremental, intra-batch DAG generation, combined with two complementary RL agents (state-specific and state-invariant) that uncover causal relationships.
  • The approach integrates the agents into an incremental learning framework so causal structure discovery can proceed over time rather than as a one-shot process.
  • MARLIN employs a factored action space to increase parallelization efficiency, improving runtime performance.
  • Experiments on both synthetic and real datasets show MARLIN outperforming existing state-of-the-art methods in both effectiveness and efficiency.
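The summary does not spell out how the continuous-to-DAG mapping works, but a common construction for this kind of policy is to decode a real-valued vector into a node ordering plus thresholded edge scores, which guarantees acyclicity by allowing only forward edges. The sketch below is a hypothetical illustration of that idea, not MARLIN's actual mapping; the function name, threshold, and use of the diagonal for ordering are all assumptions.

```python
import numpy as np

def vector_to_dag(z, threshold=0.5):
    """Illustrative sketch (not the paper's mapping): decode a continuous
    vector z of length d*d into the adjacency matrix of a DAG over d nodes.

    Acyclicity is enforced structurally: a node ordering is derived from
    the diagonal scores, and an edge i -> j is kept only when i precedes
    j in that ordering and its score clears the threshold.
    """
    d = int(np.sqrt(len(z)))
    scores = np.asarray(z).reshape(d, d)
    order = np.argsort(scores.diagonal())   # node ordering from diagonal scores
    rank = np.empty(d, dtype=int)
    rank[order] = np.arange(d)              # rank[i] = position of node i
    adj = np.zeros((d, d), dtype=int)
    for i in range(d):
        for j in range(d):
            # only forward edges w.r.t. the ordering -> no cycles possible
            if i != j and rank[i] < rank[j] and scores[i, j] > threshold:
                adj[i, j] = 1
    return adj

rng = np.random.default_rng(0)
A = vector_to_dag(rng.uniform(size=16))     # 4-node example
```

Because every edge respects a single total order, any graph this decoder emits is acyclic by construction, which is what makes a continuous search space usable for DAG discovery.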

Abstract

Uncovering causal structures from observational data is crucial for understanding complex systems and making informed decisions. While reinforcement learning (RL) has shown promise in identifying these structures in the form of a directed acyclic graph (DAG), existing methods often lack efficiency, making them unsuitable for online applications. In this paper, we propose MARLIN, an efficient multi-agent RL-based approach for incremental DAG learning. MARLIN uses a DAG generation policy that maps a continuous real-valued space to the DAG space as an intra-batch strategy, then incorporates two RL agents, state-specific and state-invariant, to uncover causal relationships, and integrates these agents into an incremental learning framework. Furthermore, the framework leverages a factored action space to enhance parallelization efficiency. Extensive experiments on synthetic and real datasets demonstrate that MARLIN outperforms state-of-the-art methods in terms of both efficiency and effectiveness.
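To make the "factored action space" claim concrete: instead of choosing one joint action over all 2^(d·d) possible edge sets, the action can be factored into independent per-edge decisions, so sampling and log-probability computation vectorize trivially. The following is a minimal sketch of that general idea under assumed shapes and names; it is not MARLIN's actual implementation.

```python
import numpy as np

def sample_factored_actions(logits, rng):
    """Sketch of a factored action space for edge decisions (illustrative).

    logits: (d, d) array of per-edge scores from a policy network.
    Each candidate edge gets its own Bernoulli action; the joint
    log-probability is just the sum of independent per-edge terms,
    which is what enables efficient parallel evaluation.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))                 # sigmoid per edge
    actions = (rng.uniform(size=logits.shape) < probs).astype(int)
    logp = np.where(actions == 1, np.log(probs), np.log1p(-probs)).sum()
    return actions, logp

rng = np.random.default_rng(0)
acts, logp = sample_factored_actions(np.zeros((3, 3)), rng)
```

The design trade-off is standard: factoring assumes (conditional) independence between edge decisions, trading some expressiveness of the joint policy for an action space that scales linearly rather than exponentially in the number of candidate edges.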