From Subsumption to Satisfiability: LLM-Assisted Active Learning for OWL Ontologies
arXiv cs.AI / 4/21/2026
Key Points
- The paper proposes an LLM-assisted active learning framework for OWL ontologies by reformulating membership queries as subsumption tests in terms of the target ontology.
- It converts each candidate axiom into a counter-concept, verbalizes it in controlled natural language, and then queries LLMs to obtain real-world examples approximating that counter-concept.
- The approach is designed so that only Type II errors (false negatives, i.e., valid axioms being rejected) can occur during ontology modeling: mistakes may slow progress, but they cannot introduce logical inconsistencies into the ontology.
- Experiments with 13 commercial LLMs indicate that recall, the metric capturing Type II error behavior in this framework, remains stable across several established ontologies.
- Overall, the work links description-logic theory (subsumption-to-satisfiability reduction) with practical LLM-driven example generation to improve ontology learning workflows.
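The "subsumption to satisfiability" reduction in the title is the standard description-logic equivalence; as a brief sketch (the symbol \(\mathcal{O}\) for the target ontology is notation introduced here, not taken from the summary):

```latex
% Standard DL reduction: a subsumption axiom holds w.r.t. an ontology O
% exactly when the corresponding counter-concept is unsatisfiable w.r.t. O.
\mathcal{O} \models C \sqsubseteq D
\quad\Longleftrightarrow\quad
C \sqcap \lnot D \ \text{is unsatisfiable w.r.t.}\ \mathcal{O}
```

Under this reading, any real-world individual an LLM can plausibly produce as an instance of the counter-concept \(C \sqcap \lnot D\) acts as a counterexample to the candidate axiom \(C \sqsubseteq D\), which is what makes LLM-generated examples usable as answers to membership queries.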