GOLDMARK: Governed Outcome-Linked Diagnostic Model Assessment Reference Kit

arXiv cs.CV / 3/24/2026


Key Points

  • The paper introduces GOLDMARK, a standardized benchmarking framework for computational biomarkers derived from H&E whole-slide images using AI, specifically pathology foundation models (PFMs).
  • GOLDMARK addresses gaps in computational pathology by releasing structured intermediate representations (e.g., tile coordinate maps, per-slide feature embeddings), quality-control metadata, predefined patient splits, and standardized evaluation outputs.
  • Models are trained on a curated TCGA cohort with clinically actionable OncoKB level 1–3 labels and evaluated on an independent MSKCC cohort with reciprocal testing to assess cross-site generalization.
  • Across 33 tumor-biomarker tasks, the reported mean AUROC is 0.689 on TCGA and 0.630 on MSKCC, improving to 0.831/0.801 when focusing on the eight highest-performing tasks.
  • The study finds that encoder-to-encoder differences are modest compared with task-specific variability, and that the strongest tasks align with known morphology-genomics associations, supporting reproducible method comparison for clinical-grade deployment.
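The mean AUROC figures quoted above are macro-averages of per-task AUROCs. A minimal pure-Python sketch of that aggregation, using a rank-based AUROC; the task names and scores below are illustrative placeholders, not GOLDMARK data:

```python
def auroc(labels, scores):
    """Rank-based AUROC: the fraction of (positive, negative) pairs ranked
    correctly, counting ties as 0.5 (equivalent to Mann-Whitney U / (n_pos * n_neg))."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative per-task (labels, model scores) pairs -- not real benchmark results.
tasks = {
    "LGG_IDH1": ([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]),
    "COAD_MSI": ([1, 0, 1, 0], [0.7, 0.8, 0.6, 0.5]),
}
mean_auroc = sum(auroc(y, s) for y, s in tasks.values()) / len(tasks)
print(round(mean_auroc, 3))  # → 0.75
```

Restricting the average to a high-performing subset of tasks, as the paper does for its eight strongest tasks, is the same computation over a filtered `tasks` dict.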

Abstract

Computational biomarkers (CBs) are histopathology-derived patterns extracted from hematoxylin-eosin (H&E) whole-slide images (WSIs) using artificial intelligence (AI) to predict therapeutic response or prognosis. Recently, slide-level multiple-instance learning (MIL) with pathology foundation models (PFMs) has become the standard baseline for CB development. While these methods have improved predictive performance, computational pathology lacks standardized intermediate data formats, provenance tracking, checkpointing conventions, and reproducible evaluation metrics required for clinical-grade deployment. We introduce GOLDMARK (https://artificialintelligencepathology.org), a standardized benchmarking framework built on a curated TCGA cohort with clinically actionable OncoKB level 1-3 biomarker labels. GOLDMARK releases structured intermediate representations, including tile coordinate maps, per-slide feature embeddings from canonical PFMs, quality-control metadata, predefined patient-level splits, trained slide-level models, and evaluation outputs. Models are trained on TCGA and evaluated on an independent MSKCC cohort with reciprocal testing. Across 33 tumor-biomarker tasks, mean AUROC was 0.689 (TCGA) and 0.630 (MSKCC). Restricting to the eight highest-performing tasks yielded mean AUROCs of 0.831 and 0.801, respectively. These tasks correspond to established morphologic-genomic associations (e.g., LGG IDH1, COAD MSI/BRAF, THCA BRAF/NRAS, BLCA FGFR3, UCEC PTEN) and showed the most stable cross-site performance. Differences between canonical encoders were modest relative to task-specific variability. GOLDMARK establishes a shared experimental substrate for computational pathology, enabling reproducible benchmarking and direct comparison of methods across datasets and models.
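Slide-level MIL of the kind the abstract describes typically scores each tile embedding with a small attention network, softmaxes the scores over the slide, and pools a weighted average into a single slide-level vector. A minimal NumPy sketch of that pooling step; the shapes, random weights, and simple (ungated) attention form are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: N tiles per slide, D-dim PFM embeddings, H-dim attention MLP.
N, D, H = 200, 768, 128
tile_embeddings = rng.standard_normal((N, D))  # stand-in for real PFM tile features

# Untrained attention weights; in practice these are learned end-to-end.
W = rng.standard_normal((D, H)) * 0.01  # hidden projection
w = rng.standard_normal((H, 1)) * 0.01  # attention scoring head

scores = np.tanh(tile_embeddings @ W) @ w        # (N, 1) per-tile relevance scores
attn = np.exp(scores - scores.max())
attn /= attn.sum()                               # softmax: weights sum to 1 over tiles
slide_embedding = (attn * tile_embeddings).sum(axis=0)  # (D,) slide-level vector
```

A slide-level classification head (e.g., a logistic layer) would then map `slide_embedding` to a biomarker probability, which is the quantity evaluated by AUROC on the TCGA and MSKCC cohorts.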