Characterising LLM-Generated Competency Questions: a Cross-Domain Empirical Study using Open and Closed Models
arXiv cs.AI / 4/20/2026
Key Points
- Competency Questions (CQs), used to elicit requirements in ontology engineering, can now be generated at scale with generative AI, but their quality and properties must be characterized across different LLMs.
- The study proposes quantitative measures to systematically compare CQs across dimensions such as readability, relevance to the source text, and structural complexity (a rough sketch of such measures follows this list).
- Experiments generate CQs from predefined use cases and scenarios, then evaluate the results across multiple open models (KimiK2-1T, Llama-3.1-8B, Llama-3.2-3B) and closed models (Gemini 2.5 Pro, GPT-4.1).
- The findings show that LLMs produce CQs with distinct “generation profiles” and that performance varies depending on the specific use case.
- Overall, the paper provides an empirical, cross-domain framework for understanding observable characteristics of LLM-generated competency questions to support more reliable ontology engineering.
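The paper's exact metric definitions are not reproduced in this summary. As a rough illustration of how per-CQ measures along those three dimensions might be computed, the Python sketch below uses stand-in proxies: Flesch reading ease for readability, term-frequency cosine similarity for relevance to the source text, and a simple marker count for structural complexity. All three proxies are assumptions for illustration, not the paper's formulas.

```python
# Illustrative sketch only: the paper defines its own quantitative measures.
# The three functions below are hypothetical stand-ins for the dimensions it
# names (readability, relevance to the source text, structural complexity).
import math
import re
from collections import Counter


def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease; higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Crude syllable estimate: vowel runs per word, at least one per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))


def relevance(cq: str, source: str) -> float:
    """Cosine similarity between term-frequency vectors of a CQ and its source text."""
    def tokenize(t: str) -> Counter:
        return Counter(re.findall(r"[a-z']+", t.lower()))

    a, b = tokenize(cq), tokenize(source)
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def structural_complexity(cq: str) -> int:
    """Toy proxy: count coordinating/subordinating markers in the question."""
    markers = ("and", "or", "which", "that", "when", "where", "how", "why")
    tokens = re.findall(r"[a-z']+", cq.lower())
    return sum(tokens.count(m) for m in markers)


if __name__ == "__main__":
    source = "The ontology must capture which sensors monitor which rooms."
    cq = "Which sensors monitor a given room, and when were they installed?"
    print(f"readability: {flesch_reading_ease(cq):.1f}")
    print(f"relevance:   {relevance(cq, source):.2f}")
    print(f"complexity:  {structural_complexity(cq)}")
```

Applying measures like these to every CQ a model generates is what makes per-model "generation profiles" comparable across use cases.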