Neural Decision-Propagation for Answer Set Programming

arXiv cs.AI / 5/5/2026


Key Points

  • The paper addresses a key bottleneck in neuro-symbolic AI by proposing a neural-friendly alternative to ASP pipelines that rely on classical solvers for reasoning over stable model semantics.
  • It introduces Decision-Propagation (DProp), a new method that alternates falsity decisions with truth propagation to compute stable models, showing it captures stable model semantics.
  • Building on DProp, the authors propose Neural DProp (NDProp), which makes the approach differentiable by using neural computation for decisions and fuzzy evaluation for propagations.
  • Experiments evaluate NDProp's ability to learn decision heuristics and to perform neuro-symbolic integration; the results indicate improved accuracy and scalability over existing neuro-symbolic methods on standard benchmarks.
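To make the decide/propagate alternation concrete, here is a minimal, hypothetical Python sketch of such a loop for a ground normal logic program. It is an illustrative reconstruction under the description above ("alternates falsity decisions with truth propagation"), not the paper's actual algorithm; rule representation, the naive decision heuristic, and the stability check are all assumptions.

```python
# Each rule is (head, pos_body, neg_body), with bodies as sets of atoms.
# Example program: a :- not b.   b :- not a.

def propagate(rules, true, false):
    """Derive heads of rules whose positive body is already true and whose
    negative body atoms are all decided false, until a fixpoint."""
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in true and pos <= true and neg <= false:
                true.add(head)
                changed = True

def dprop(atoms, rules):
    """Alternate falsity decisions with truth propagation. Returns a model
    on success, or None on conflict (a decided-false atom got derived).
    A poor decision order can fail; learning a good decision heuristic is
    exactly what a neural variant would target."""
    true, false = set(), set()
    propagate(rules, true, false)
    while true | false != atoms:
        a = min(atoms - true - false)   # naive heuristic: alphabetical pick
        false.add(a)                    # decision step: assume `a` is false
        propagate(rules, true, false)   # propagation step
        if true & false:
            return None                 # conflict
    return true

def is_stable(atoms, rules, model):
    """Check stability via the Gelfond-Lifschitz reduct: drop rules whose
    negative body intersects the model, then take the least fixpoint."""
    reduct = [(h, p) for h, p, n in rules if not (n & model)]
    derived, changed = set(), True
    while changed:
        changed = False
        for h, p in reduct:
            if h not in derived and p <= derived:
                derived.add(h)
                changed = True
    return derived == model
```

For the two-rule program above, deciding `a` false lets propagation derive `b`, yielding the stable model `{b}`; the stability check confirms it against the reduct.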

Abstract

Integration of Answer Set Programming (ASP) with neural networks has emerged as a promising tool in neuro-symbolic AI. While existing approaches extend the capabilities of ASP to real-world domains, their reasoning pipelines depend on classical solvers, which is a bottleneck for scalability. To tackle this problem, we propose a new method to compute stable models, called decision-propagation (DProp), which alternates falsity decisions and truth propagations. Successful DProp computations are shown to capture the stable model semantics. We then develop Neural DProp (NDProp), a differentiable extension of DProp with neural computation for decisions and fuzzy evaluation for propagations. We evaluate the capabilities of NDProp for learning decision heuristics as well as neuro-symbolic integration, and compare it with existing neuro-symbolic approaches. The results show that NDProp can learn to efficiently compute stable models, and it improves accuracy and scalability on neuro-symbolic benchmarks.
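The "fuzzy evaluation for propagations" can be pictured as a soft immediate-consequence step over truth values in [0, 1]. The sketch below is a hypothetical illustration, not the paper's operator: it assumes a product t-norm for conjunction and `1 - v` for negation as failure, with the falsity decisions (in NDProp, supplied by a neural network) folded in beforehand.

```python
# Atoms map to fuzzy truth values in [0, 1]; rules are (head, pos, neg)
# with bodies as sets of atoms. All choices of t-norm and aggregation
# here are illustrative assumptions.

def fuzzy_step(rules, val):
    """One soft propagation step: each atom's new value is the max over
    its rules of the fuzzy body score (0 if it heads no rule)."""
    new = {a: 0.0 for a in val}
    for head, pos, neg in rules:
        score = 1.0
        for a in pos:
            score *= val[a]          # conjunction: product t-norm
        for a in neg:
            score *= 1.0 - val[a]    # negation as failure: 1 - v
        new[head] = max(new[head], score)  # disjunction over rules: max
    return new
```

Because every operation (product, `1 - v`, max) is (sub)differentiable, iterating such a step lets gradients flow from a downstream loss back into whatever network produced the initial decision values.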