ANDRE: An Attention-based Neuro-symbolic Differentiable Rule Extractor
arXiv cs.AI / 5/7/2026
Key Points
- The paper introduces ANDRE, an attention-based neuro-symbolic differentiable ILP framework designed to learn interpretable first-order logic programs from noisy and probabilistic data.
- It improves scalability and robustness by optimizing over a continuous rule space using fully differentiable conjunction/disjunction operators that approximate logical min-max semantics.
- ANDRE eliminates the need for predefined rule templates and avoids problematic fuzzy operators, reducing issues such as vanishing gradients and poor approximation of logical structure.
- The method supports flexible rule induction by softly selecting, negating, or excluding predicates while preserving symbolic structure for interpretability.
- Experiments across classical ILP benchmarks, large-scale knowledge bases, and synthetic probabilistic/noisy settings show that ANDRE matches or exceeds the performance of prior methods and more reliably recovers the correct symbolic rules under uncertainty.
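The mechanism the key points describe — softly including, negating, or excluding each predicate, then combining literals with differentiable operators that approximate min/max semantics — can be illustrated with a minimal sketch. This is not the paper's actual implementation: the function names, the softmax over a three-way {include, negate, exclude} role choice, and the softmin/softmax smoothing are all illustrative assumptions.

```python
import math

def soft_select(logits):
    """Softmax over per-predicate role logits {include, negate, exclude} (assumed scheme)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def soft_conjunction(truth_values, role_logits, temp=0.1):
    """Differentiable AND over softly selected predicates.

    truth_values: predicate truth values in [0, 1]
    role_logits:  one (include, negate, exclude) logit triple per predicate
    Combines literals with a smooth minimum (softmin-weighted mean),
    approximating the min semantics of logical conjunction.
    """
    lits = []
    for v, logits in zip(truth_values, role_logits):
        w_inc, w_neg, w_exc = soft_select(logits)
        # include -> v, negate -> 1 - v, exclude -> 1.0 (identity of AND)
        lits.append(w_inc * v + w_neg * (1.0 - v) + w_exc * 1.0)
    weights = [math.exp(-l / temp) for l in lits]
    z = sum(weights)
    return sum(w * l for w, l in zip(weights, lits)) / z

def soft_disjunction(values, temp=0.1):
    """Smooth max (softmax-weighted mean), approximating the max semantics of OR."""
    weights = [math.exp(v / temp) for v in values]
    z = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / z
```

At a low temperature the smooth operators collapse to hard min/max, so a rule with sharply peaked role logits behaves like an ordinary first-order clause, while at higher temperatures gradients flow through every predicate — the trade-off that lets a continuous optimizer search the discrete rule space.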