Neuro-Symbolic Learning for Predictive Process Monitoring via Two-Stage Logic Tensor Networks with Rule Pruning
arXiv cs.AI / 3/31/2026
Key Points
- The paper proposes a neuro-symbolic framework for predictive process monitoring on sequential event data, embedding domain logic as differentiable constraints using Logic Tensor Networks (LTNs).
- It encodes control-flow, temporal, and payload knowledge with Linear Temporal Logic (LTL) and first-order logic, enforcing rule-based relationships relevant to domains such as healthcare and fraud/compliance.
- To address a common LTN trade-off, where optimizing for logic satisfaction can harm prediction quality, it introduces a two-stage optimization strategy: a weighted axiom loss during pretraining, followed by rule pruning based on satisfaction dynamics.
- Experiments on four real-world event logs show that injecting domain constraints improves predictive performance, and that the two-stage optimization is crucial because naive knowledge integration can degrade results.
- The method is reported to perform especially well in compliance-constrained settings with limited compliant training examples, outperforming purely data-driven baselines while maintaining constraint adherence.
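The two-stage idea in the key points can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: it uses a Reichenbach fuzzy implication as a stand-in for LTN grounding, a weighted axiom loss of the kind described for pretraining, and a pruning rule that drops axioms whose satisfaction stays persistently low. All function names, weights, and thresholds are assumptions for illustration.

```python
# Illustrative sketch (not the paper's code) of the two-stage strategy:
# stage 1 adds a weighted axiom loss to pretraining; stage 2 prunes
# axioms whose fuzzy satisfaction stays low. Values are hypothetical.

def implies(a, b):
    """Reichenbach fuzzy implication: truth of 'a -> b' for values in [0, 1]."""
    return 1.0 - a + a * b

def axiom_loss(satisfactions, weights):
    """Weighted axiom loss: penalize axioms with low fuzzy truth."""
    return sum(w * (1.0 - s) for s, w in zip(satisfactions, weights))

def prune(sat_history, threshold=0.5):
    """Keep axioms whose mean satisfaction over pretraining exceeds threshold."""
    keep = []
    for k, history in enumerate(sat_history):
        if sum(history) / len(history) > threshold:
            keep.append(k)
    return keep

# Toy run: satisfaction of three axioms tracked over three pretraining epochs.
sat_history = [
    [0.9, 0.92, 0.95],  # well-satisfied rule: kept
    [0.2, 0.25, 0.3],   # persistently violated rule: pruned
    [0.6, 0.7, 0.8],    # improving rule: kept
]
kept = prune(sat_history)
total = axiom_loss([h[-1] for h in sat_history], weights=[1.0, 1.0, 1.0])
print(kept)               # → [0, 2]
print(round(total, 2))    # → 0.95
```

After pruning, only the surviving axioms would contribute to the second-stage loss, which is how the paper reportedly avoids naive knowledge integration degrading predictive performance.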