Neural Autoregressive Flows for Markov Boundary Learning
arXiv cs.LG / 2026-03-24
Key points
- The paper proposes a new framework for discovering the Markov boundary, the minimal set of variables that renders a target conditionally independent of all others, using conditional entropy from information theory as the scoring criterion.
- It introduces a masked autoregressive neural network to model complex variable dependencies more effectively than prior scoring/search approaches.
- The method includes a parallelizable greedy search strategy with polynomial-time complexity, together with theoretical analysis supporting the reliability of the discovered boundaries.
- It shows that initializing causal discovery with learned Markov boundaries accelerates convergence, extending the framework's usefulness beyond boundary learning alone.
- Experiments on real-world and synthetic datasets indicate the approach is scalable and achieves superior results on both Markov boundary discovery and causal discovery tasks.
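The scoring-and-search idea above can be illustrated with a toy sketch: greedily grow a candidate boundary by adding, at each step, the variable that most reduces the empirical conditional entropy H(Y | S) of the target. This is only a didactic stand-in, assuming discrete data and plug-in entropy estimates from counts; the paper instead estimates these quantities with a masked autoregressive neural network, and the function names here (`cond_entropy`, `greedy_markov_boundary`) are my own, not the authors'.

```python
import itertools
import math
from collections import Counter

def cond_entropy(data, target_idx, subset):
    """Plug-in estimate of H(Y | X_S) from discrete samples (rows are tuples)."""
    joint, marginal = Counter(), Counter()
    for row in data:
        key = tuple(row[i] for i in subset)
        joint[(key, row[target_idx])] += 1
        marginal[key] += 1
    n = len(data)
    # H(Y | X_S) = -sum p(x_S, y) * log2 p(y | x_S)
    return -sum((c / n) * math.log2(c / marginal[key])
                for (key, _), c in joint.items())

def greedy_markov_boundary(data, target_idx, candidates, tol=1e-9):
    """Forward greedy search: repeatedly add the variable that most
    reduces H(Y | S); stop when no addition yields a real improvement."""
    boundary = []
    best = cond_entropy(data, target_idx, boundary)
    while True:
        gain, pick = tol, None
        for v in candidates:
            if v in boundary:
                continue
            h = cond_entropy(data, target_idx, boundary + [v])
            if best - h > gain:
                gain, pick = best - h, v
        if pick is None:
            return boundary
        boundary.append(pick)
        best -= gain

# Toy data: Y = X0 OR X1, while X2 is irrelevant. Enumerating the full
# joint distribution makes the entropy estimates exact.
data = [(x0, x1, x2, x0 | x1)
        for x0, x1, x2 in itertools.product([0, 1], repeat=3)]
print(greedy_markov_boundary(data, 3, [0, 1, 2]))  # -> [0, 1]
```

The `tol` threshold plays the role of a stopping rule: empirical conditional entropy never increases when variables are added, so the search stops once the improvement is negligible, which is what keeps the irrelevant X2 out of the result. The inner candidate loop is exactly the part the paper parallelizes.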

