AI Navigate

Knowledge, Rules and Their Embeddings: Two Paths towards Neuro-Symbolic JEPA

arXiv cs.LG / 3/17/2026


Key Points

  • The paper introduces RiJEPA (Rule-informed Joint-Embedding Predictive Architectures), a neuro-symbolic framework that combines neural predictive architectures with symbolic logic to improve interpretability and robustness.
  • It presents two directions. The first injects structured inductive biases into JEPA training via Energy-Based Constraints (EBC) and a multi-modal dual-encoder, reshaping the representation manifold into geometrically sound logical basins in place of arbitrary statistical correlations.
  • The second relaxes rigid, discrete symbolic rules into continuous, differentiable logic and applies gradient-guided Langevin diffusion over the rule energy landscape, enabling continuous rule discovery along with unconditional joint generation, conditional forward and abductive inference, and marginal predictive translation.
  • Empirical evaluations on synthetic topological simulations and a high-stakes clinical use case demonstrate the approach's effectiveness and potential for robust, generative, and interpretable neuro-symbolic representation learning.
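The "relaxing discrete symbolic rules into differentiable logic" idea can be illustrated with a standard product t-norm relaxation. This is a generic sketch, not the paper's code: the exact relaxation, variable names, and the clinical rule below are illustrative assumptions. Boolean connectives become smooth functions of truth values in [0, 1], so a rule's degree of violation is differentiable and can serve as an energy term.

```python
# Illustrative sketch (not RiJEPA's implementation): a product t-norm
# relaxation of boolean logic. Truth values live in [0, 1], so rule
# satisfaction is smooth and can act as a differentiable energy.

def soft_not(a):
    return 1.0 - a

def soft_and(a, b):      # product t-norm
    return a * b

def soft_or(a, b):       # dual t-conorm
    return a + b - a * b

def soft_implies(a, b):  # a -> b  ==  (not a) or b
    return soft_or(soft_not(a), b)

def rule_energy(truth):
    """Energy = degree of violation: 0 when the rule holds exactly."""
    return 1.0 - truth

# Hypothetical rule: (fever AND cough) -> flu, with soft truth values.
fever, cough, flu = 0.9, 0.8, 0.3
e = rule_energy(soft_implies(soft_and(fever, cough), flu))
# e > 0 because the assignment partially violates the rule; since e is
# smooth in (fever, cough, flu), gradients can push representations
# toward rule-consistent configurations instead of combinatorial search.
```

Because the energy is differentiable everywhere, rule satisfaction can be optimized jointly with a predictive loss, which is what lets the framework sidestep the NP-hard discrete search mentioned in the abstract.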

Abstract

Modern self-supervised predictive architectures excel at capturing complex statistical correlations from high-dimensional data but lack mechanisms to internalize verifiable human logic, leaving them susceptible to spurious correlations and shortcut learning. Conversely, traditional rule-based inference systems offer rigorous, interpretable logic but suffer from discrete boundaries and NP-hard combinatorial explosion. To bridge this divide, we propose a bidirectional neuro-symbolic framework centered around Rule-informed Joint-Embedding Predictive Architectures (RiJEPA). In the first direction, we inject structured inductive biases into JEPA training via Energy-Based Constraints (EBC) and a multi-modal dual-encoder architecture. This fundamentally reshapes the representation manifold, replacing arbitrary statistical correlations with geometrically sound logical basins. In the second direction, we demonstrate that by relaxing rigid, discrete symbolic rules into a continuous, differentiable logic, we can bypass traditional combinatorial search for new rule generation. By leveraging gradient-guided Langevin diffusion within the rule energy landscape, we introduce novel paradigms for continuous rule discovery, which enable unconditional joint generation, conditional forward and abductive inference, and marginal predictive translation. Empirical evaluations on both synthetic topological simulations and a high-stakes clinical use case confirm the efficacy of our approach. Ultimately, this framework establishes a powerful foundation for robust, generative, and interpretable neuro-symbolic representation learning.
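The "gradient-guided Langevin diffusion within the rule energy landscape" can be sketched with a generic unadjusted Langevin sampler. Everything here is an illustrative stand-in: the paper's rule energy is learned and lives in embedding space, whereas this toy uses a hand-written double-well energy whose two minima play the role of "logical basins".

```python
import math
import random

# Toy sketch (assumed, not the paper's implementation): unadjusted
# Langevin dynamics on a double-well energy E(x) = beta * (x^2 - 1)^2.
# The minima at x = -1 and x = +1 stand in for low-energy logical basins.

BETA = 8.0  # inverse temperature: sharpens the basins (illustrative)

def grad_energy(x):
    # d/dx of BETA * (x**2 - 1)**2
    return BETA * 4.0 * x * (x * x - 1.0)

def langevin_sample(n_chains=64, steps=2000, step_size=5e-3, seed=0):
    """Gradient descent on the energy plus Gaussian noise; the chain's
    stationary distribution is proportional to exp(-E(x))."""
    rng = random.Random(seed)
    xs = [0.0] * n_chains  # start at the saddle between the two basins
    sigma = math.sqrt(2.0 * step_size)
    for _ in range(steps):
        xs = [x - step_size * grad_energy(x) + sigma * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs

samples = langevin_sample()
# Chains drift downhill and settle near the low-energy basins at -1 and +1.
```

The conditional variants the abstract lists (forward and abductive inference) would follow the same update while clamping the observed coordinates, so only the unknown ones diffuse through the energy landscape.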