Trustworthy Clinical Decision Support Using Meta-Predicates and Domain-Specific Languages

arXiv cs.AI · April 25, 2026


Key Points

  • The paper proposes a framework for “trustworthy” clinical decision support that goes beyond verifying syntax and structure by enforcing the epistemological appropriateness of the evidence used in clinical rules.
  • It introduces meta-predicates—predicates about predicates—paired with an epistemological type system that classifies rule annotations across purpose, knowledge domain, scale, and acquisition method to constrain which evidence types are allowed.
  • The approach is implemented in the open-source AnFiSA platform for genetic variant curation and tested on 5.6 million variants from the Genome in a Bottle benchmark using the Brigham Genomics Medicine protocol.
  • By reformulating decision trees as unate cascades, the method produces per-variant audit trails that explain which rule classified each variant and on what basis, and it can validate both human-written and AI-generated rules before deployment.
  • The framework is positioned as complementary to post-hoc explainability methods like LIME and SHAP by restricting permissible evidence before release while preserving human readability of the decision logic.
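To make the meta-predicate idea concrete, here is a minimal sketch in Python. All names (`EvidenceType`, `diagnostic_only`, the four dimension values) are hypothetical illustrations of the paper's four-dimensional epistemological type system, not AnFiSA's actual API: a meta-predicate is just a predicate over the evidence-type annotations attached to a rule, checked before the rule is deployed.

```python
from dataclasses import dataclass

# Hypothetical annotation along the paper's four dimensions:
# purpose, knowledge domain, scale, and method of acquisition.
@dataclass(frozen=True)
class EvidenceType:
    purpose: str        # e.g. "diagnostic" vs "research"
    domain: str         # e.g. "population_genetics"
    scale: str          # e.g. "variant" vs "gene"
    acquisition: str    # e.g. "clinical_observation"

# A meta-predicate: a predicate about the predicates (rules),
# expressed as a constraint on the evidence types they may cite.
def diagnostic_only(ev: EvidenceType) -> bool:
    """Permit only evidence gathered for a diagnostic purpose."""
    return ev.purpose == "diagnostic"

def validate_rule(annotations, meta_predicate):
    """Return the annotations that violate the meta-predicate
    (an empty list means the rule passes validation)."""
    return [ev for ev in annotations if not meta_predicate(ev)]

rule_annotations = [
    EvidenceType("diagnostic", "population_genetics",
                 "variant", "clinical_observation"),
    EvidenceType("research", "functional_assay",
                 "gene", "computational_prediction"),
]

# The research-grade annotation is flagged before deployment.
violations = validate_rule(rule_annotations, diagnostic_only)
```

Because validation runs over a rule's declared annotations rather than its runtime behavior, the same check applies identically to human-written and AI-generated rules.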

Abstract

**Background:** Regulatory frameworks for AI in healthcare, including the EU AI Act and FDA guidance on AI/ML-based medical devices, require clinical decision support to demonstrate not only accuracy but auditability. Existing formal languages for clinical logic validate syntactic and structural correctness, but not whether decision rules use epistemologically appropriate evidence.

**Methods:** Drawing on design-by-contract principles, we introduce meta-predicates -- predicates about predicates -- for asserting epistemological constraints on clinical decision rules expressed in a DSL. An epistemological type system classifies annotations along four dimensions: purpose, knowledge domain, scale, and method of acquisition. Meta-predicates assert which evidence types are permissible in any given rule. The framework is instantiated in AnFiSA, an open-source platform for genetic variant curation, and demonstrated using the Brigham Genomics Medicine protocol on 5.6 million variants from the Genome in a Bottle benchmark.

**Results:** Decision trees used in variant interpretation can be reformulated as unate cascades, enabling per-variant audit trails that identify which rule classified each variant and why. Meta-predicate validation catches epistemological errors before deployment, whether rules are human-written or AI-generated. The approach complements post-hoc methods such as LIME and SHAP: where explanation reveals what evidence was used after the fact, meta-predicates constrain what evidence may be used before deployment, while preserving human readability.

**Conclusions:** Meta-predicate validation is a step toward demonstrating not only that decisions are accurate but that they rest on appropriate evidence in ways that can be independently audited. While demonstrated in genomics, the approach generalises to any domain requiring auditable decision logic.
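The audit-trail property of a cascade can be illustrated with a short sketch. This is a first-match-wins chain of named stages, which is the structural feature that yields a per-variant audit record; the rule names, field names, and thresholds below are invented for illustration and are not drawn from the Brigham Genomics Medicine protocol or AnFiSA's schema.

```python
# Hypothetical cascade: a decision tree flattened into an ordered
# list of (rule_name, predicate, verdict) stages. The first stage
# whose predicate fires classifies the variant, so the classification
# comes with a built-in audit record: which rule, on what basis.

def classify(variant, cascade):
    for name, predicate, verdict in cascade:
        if predicate(variant):
            return verdict, name   # verdict plus the rule that produced it
    return "unclassified", "no_rule_matched"

# Toy rules over illustrative variant fields.
cascade = [
    ("benign_common",
     lambda v: v["allele_freq"] > 0.05, "benign"),
    ("pathogenic_lof",
     lambda v: v["consequence"] == "stop_gained", "pathogenic"),
]

variant = {"allele_freq": 0.001, "consequence": "stop_gained"}
verdict, fired_rule = classify(variant, cascade)
```

Because exactly one stage accounts for each decision, the trail stays human-readable: auditing a classification means reading one named rule rather than tracing every path through a tree.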