Trustworthy Clinical Decision Support Using Meta-Predicates and Domain-Specific Languages
arXiv cs.AI / 4/25/2026
Key Points
- The paper proposes a framework for “trustworthy” clinical decision support that goes beyond verifying syntax and structure by enforcing epistemological appropriateness of evidence used in clinical rules.
- It introduces meta-predicates (predicates about predicates) paired with an epistemological type system that classifies rule annotations by purpose, knowledge domain, scale, and acquisition method to constrain which evidence types are allowed.
- The approach is implemented in the open-source AnFiSA platform for genetic variant curation and tested on 5.6 million variants from the Genome in a Bottle benchmark using the Brigham Genomics Medicine protocol.
- By reformulating decision trees as unate cascades, the method produces per-variant audit trails that explain which rule classified each variant and on what basis, and it can validate both human-written and AI-generated rules before deployment.
- The framework is positioned as complementary to post-hoc explainability methods such as LIME and SHAP: it restricts permissible evidence before deployment rather than explaining predictions afterward, while keeping the decision logic human-readable.
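The ideas above can be illustrated with a minimal sketch. This is not the AnFiSA API or the paper's actual formalism; the names (`EvidenceType`, `admissible`, `classify`) and the policy structure are hypothetical, chosen only to show how an epistemological type tag on each evidence predicate, a meta-predicate that checks those tags, and a unate-cascade classifier producing a per-variant audit record could fit together:

```python
# Hypothetical sketch of the paper's concepts; not the AnFiSA implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceType:
    """Epistemological tag on a predicate: the four axes named in the paper."""
    purpose: str      # e.g. "diagnosis"
    domain: str       # e.g. "population-genetics"
    scale: str        # e.g. "variant"
    acquisition: str  # e.g. "measured" vs "predicted"

@dataclass(frozen=True)
class Predicate:
    name: str
    etype: EvidenceType

def admissible(rule_purpose, predicates, policy):
    """Meta-predicate: a predicate about predicates. True iff every piece
    of evidence the rule uses has an acquisition method that the policy
    permits for the rule's stated purpose."""
    allowed = policy[rule_purpose]
    return all(p.etype.acquisition in allowed for p in predicates)

def classify(variant, cascade):
    """Unate cascade: an ordered list of (rule_name, test) pairs. The first
    rule that fires classifies the variant, and the pair (variant id,
    rule name) serves as the per-variant audit-trail entry."""
    for rule_name, test in cascade:
        if test(variant):
            return (variant["id"], rule_name)
    return (variant["id"], "unclassified")
```

A rule built only on predicted (in-silico) evidence would fail the `admissible` check under a policy that requires measured evidence for diagnostic rules, so it could be rejected before deployment, whether it was written by a human or generated by an AI.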