From Subsumption to Satisfiability: LLM-Assisted Active Learning for OWL Ontologies

arXiv cs.AI / 4/21/2026


Key Points

  • The paper proposes an LLM-assisted active learning framework for OWL ontologies that reformulates membership queries as subsumption tests with respect to the target ontology.
  • It converts each candidate axiom into a counter-concept, verbalizes it in controlled natural language, and then queries LLMs to obtain real-world examples approximating that counter-concept.
  • The approach is designed so that only Type II (false-negative) errors can occur during ontology modelling: such errors may delay progress but cannot introduce logical inconsistencies.
  • Experiments with 13 commercial LLMs indicate that recall, which corresponds to Type II errors in the framework, remains stable across several well-established ontologies.
  • Overall, the work links description-logic theory (subsumption-to-satisfiability reduction) with practical LLM-driven example generation to improve ontology learning workflows.

Abstract

In active learning, membership queries (MQs) allow a learner to pose questions to a teacher, such as "Is every apple a fruit?", to which the teacher responds correctly with yes or no. These MQs can be viewed as subsumption tests with respect to the target ontology. Inspired by the standard reduction of subsumption to satisfiability in description logics, we reformulate each candidate axiom into its corresponding counter-concept and verbalise it in controlled natural language before presenting it to Large Language Models (LLMs). We introduce LLMs as a third component that provides real-world examples approximating an instance of the counter-concept. This design property ensures that only Type II errors may occur in ontology modelling; in the worst case, these errors merely delay the construction process without introducing inconsistencies. Experimental results on 13 commercial LLMs show that recall, corresponding to Type II errors in our framework, remains stable across several well-established ontologies.
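The reduction at the heart of the abstract is the standard description-logic fact that C ⊑ D holds exactly when the counter-concept C ⊓ ¬D is unsatisfiable, i.e. has no possible instance. The sketch below illustrates that pipeline step in plain Python; the class names ("Apple", "Fruit") and the verbalisation template are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subsumption:
    """A candidate axiom C ⊑ D, e.g. 'every apple is a fruit'."""
    sub: str  # C, the subsumed concept
    sup: str  # D, the subsuming concept

def counter_concept(axiom: Subsumption) -> str:
    """Standard reduction: C ⊑ D holds iff C ⊓ ¬D is unsatisfiable,
    so any witness of the counter-concept refutes the axiom."""
    return f"{axiom.sub} ⊓ ¬{axiom.sup}"

def verbalise(axiom: Subsumption) -> str:
    """Render the counter-concept as a controlled-natural-language
    prompt asking an LLM for a real-world example (template is a
    hypothetical stand-in for the paper's verbalisation)."""
    return (f"Name a real-world example of something that is "
            f"a {axiom.sub.lower()} but is not a {axiom.sup.lower()}.")

axiom = Subsumption(sub="Apple", sup="Fruit")
print(counter_concept(axiom))  # Apple ⊓ ¬Fruit
print(verbalise(axiom))
```

If the LLM cannot supply a plausible instance of the counter-concept, the candidate axiom survives; if it does supply one, the axiom is set aside. A spurious rejection here is exactly the Type II error the paper argues is harmless to consistency.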