World2Rules: A Neuro-Symbolic Framework for Learning World-Governing Safety Rules for Aviation

arXiv cs.RO / 4/1/2026


Key Points

  • The paper introduces World2Rules, a neuro-symbolic framework that learns formal “world-governing” aviation safety rules from multimodal operational data plus crash/incident reports.
  • It uses neural models to propose candidate symbolic facts from noisy text/visual inputs, then applies inductive logic programming as a verification layer for stronger formal grounding.
  • A hierarchical reflective reasoning process enforces consistency across examples, subsets, and rules, filtering unreliable evidence and pruning unsupported hypotheses to limit error propagation.
  • In evaluations on real-world aviation safety data, World2Rules improves rule-learning performance, achieving higher F1 than purely neural and single-pass neuro-symbolic baselines, while producing compact, interpretable first-order logic rules.
  • The approach targets safety-critical suitability by combining interpretability and formal analysis with robustness to noisy, inconsistent, and sparse failure-case evidence.
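The propose-then-verify division of labor described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Fact` schema, predicate names, and the confidence-threshold check standing in for the symbolic verification layer are all assumptions made here for clarity.

```python
from dataclasses import dataclass

# Hypothetical candidate fact emitted by a neural extractor; the paper's
# actual fact schema is not given, so these names are illustrative only.
@dataclass(frozen=True)
class Fact:
    predicate: str
    args: tuple
    confidence: float

def propose_facts(report_text: str) -> list[Fact]:
    """Stand-in for the neural proposal mechanism: in the real framework
    this is a learned model over text/visual inputs; here we return a
    fixed list so the pipeline shape is visible."""
    return [
        Fact("low_visibility", ("flight_17",), 0.91),
        Fact("runway_incursion", ("flight_17",), 0.34),
    ]

def verify(candidates: list[Fact], threshold: float = 0.5) -> list[Fact]:
    """Toy verification layer: discard low-confidence candidates before
    any rule induction sees them. The actual framework uses inductive
    logic programming for this role, not a single threshold."""
    return [f for f in candidates if f.confidence >= threshold]

verified = verify(propose_facts("...incident narrative..."))
```

The key design point is that the neural model only proposes; nothing it emits becomes a symbolic fact until the verification layer accepts it, which is what limits error propagation downstream.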

Abstract

Many real-world safety-critical systems are governed by explicit rules that define unsafe world configurations and constrain agent interactions. In practice, these rules are complex and context-dependent, making manual specification incomplete and error-prone. Learning such rules from real-world multimodal data is further challenged by noise, inconsistency, and sparse failure cases. Neural models can extract structure from text and visual data but lack formal guarantees, while symbolic methods provide verifiability yet are brittle when applied directly to imperfect observations. We present World2Rules, a neuro-symbolic framework for learning world-governing safety rules from real-world multimodal aviation data. World2Rules learns from both nominal operational data and aviation crash and incident reports, treating neural models as proposal mechanisms for candidate symbolic facts and inductive logic programming as a verification layer. The framework employs hierarchical reflective reasoning, enforcing consistency across examples, subsets, and rules to filter unreliable evidence, aggregate only mutually consistent components, and prune unsupported hypotheses. This design limits error propagation from noisy neural extractions and yields compact, interpretable first-order logic rules that characterize unsafe world configurations. We evaluate World2Rules on real-world aviation safety data and show that it learns rules that achieve a 23.6% higher F1 score than a purely neural baseline and a 43.2% higher F1 score than a single-pass neuro-symbolic baseline, while remaining suitable for safety-critical reasoning and formal analysis.
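The consistency-driven pruning the abstract describes can also be sketched. The following is a toy version under stated assumptions: rules are represented as a head plus a set of body predicates, examples as sets of observed facts, and "unsupported" is reduced to a minimum-support count; the paper's hierarchical reasoning across examples, subsets, and rules is far richer than this.

```python
def prune_unsupported(rules: list[dict], examples: list[set], min_support: int = 2) -> list[dict]:
    """Keep only rules whose body predicates are jointly satisfied by at
    least `min_support` examples -- a simplified stand-in for the
    framework's pruning of unsupported hypotheses."""
    kept = []
    for rule in rules:
        # A rule is supported by an example if every body predicate holds in it.
        support = sum(1 for ex in examples if rule["body"] <= ex)
        if support >= min_support:
            kept.append(rule)
    return kept

# Illustrative data: each example is the set of facts extracted from one report.
examples = [
    {"low_visibility", "night_ops"},
    {"low_visibility", "night_ops", "icing"},
    {"icing"},
]
rules = [
    {"head": "unsafe", "body": {"low_visibility", "night_ops"}},  # supported twice
    {"head": "unsafe", "body": {"runway_incursion"}},             # unsupported
]
kept = prune_unsupported(rules, examples)
```

Aggregating only mutually consistent evidence in this way is what lets the learned rule set stay compact: hypotheses that survive pruning are exactly those grounded in repeated, agreeing observations.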