MADE: A Living Benchmark for Multi-Label Text Classification with Uncertainty Quantification of Medical Device Adverse Events

arXiv cs.CL · April 17, 2026


Key Points

  • The paper introduces MADE, a “living” multi-label text classification benchmark for medical device adverse event reports that is continuously updated to reduce training-data contamination risks.
  • MADE is designed to address MLTC challenges including long-tailed hierarchical label distributions, label dependencies, and combinatorial complexity, while providing reproducible evaluations via strict temporal splits.
  • The authors report extensive baselines across 20+ encoder- and decoder-only models under fine-tuning and few-shot (instruction-tuned/reasoning) settings, including variants with local or API access.
  • They systematically compare uncertainty quantification approaches (entropy/consistency-based and self-verbalized methods) and find key trade-offs: generative fine-tuning yields the most reliable UQ, while large reasoning models help rare-label accuracy but can show weak UQ.
  • The study concludes that self-verbalized confidence is not a dependable proxy for true uncertainty and that smaller discriminatively fine-tuned decoders can balance strong head-to-tail accuracy with competitive UQ.
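The two non-verbalized UQ families mentioned above can be illustrated with a minimal sketch for the multi-label setting. This is an illustrative implementation, not the paper's code: the function names, the per-label binary-entropy score, and the Jaccard-based consistency score are assumed choices.

```python
import math

def binary_entropy(p, eps=1e-12):
    """Shannon entropy (nats) of a Bernoulli variable with probability p."""
    p = min(max(p, eps), 1.0 - eps)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def multilabel_entropy_uncertainty(label_probs):
    """Entropy-based UQ: mean per-label binary entropy of predicted
    label probabilities, used as a scalar uncertainty score."""
    return sum(binary_entropy(p) for p in label_probs) / len(label_probs)

def consistency_uncertainty(sampled_label_sets):
    """Consistency-based UQ: sample several predictions (e.g. with
    temperature > 0) and score disagreement as 1 minus the average
    pairwise Jaccard similarity of the predicted label sets."""
    sets = [frozenset(s) for s in sampled_label_sets]
    if len(sets) < 2:
        return 0.0
    sims = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            union = sets[i] | sets[j]
            sims.append(len(sets[i] & sets[j]) / len(union) if union else 1.0)
    return 1.0 - sum(sims) / len(sims)
```

A maximally unsure prediction (all probabilities near 0.5, or samples that share no labels) scores near the top of each scale, while confident, self-consistent predictions score near zero.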

Abstract

Machine learning in high-stakes domains such as healthcare requires not only strong predictive performance but also reliable uncertainty quantification (UQ) to support human oversight. Multi-label text classification (MLTC) is a central task in this domain, yet remains challenging due to label imbalances, dependencies, and combinatorial complexity. Existing MLTC benchmarks are increasingly saturated and may be affected by training data contamination, making it difficult to distinguish genuine reasoning capabilities from memorization. We introduce MADE, a living MLTC benchmark derived from medical device adverse event reports and continuously updated with newly published reports to prevent contamination. MADE features a long-tailed distribution of hierarchical labels and enables reproducible evaluation with strict temporal splits. We establish baselines across more than 20 encoder- and decoder-only models under fine-tuning and few-shot settings (instruction-tuned/reasoning variants, local/API-accessible). We systematically assess entropy-/consistency-based and self-verbalized UQ methods. Results show clear trade-offs: smaller discriminatively fine-tuned decoders achieve the strongest head-to-tail accuracy while maintaining competitive UQ; generative fine-tuning delivers the most reliable UQ; large reasoning models improve performance on rare labels yet exhibit surprisingly weak UQ; and self-verbalized confidence is not a reliable proxy for uncertainty. Our work is publicly available at https://hhi.fraunhofer.de/aml-demonstrator/made-benchmark.
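A strict temporal split of the kind the abstract describes can be sketched as follows. The record layout (a `published` date field) and the optional gap between train and test cutoffs are assumptions for illustration; the benchmark's actual split logic may differ.

```python
from datetime import date

def temporal_split(reports, train_end, test_start):
    """Strict temporal split: train only on reports published before
    `train_end`, evaluate only on reports published on or after
    `test_start`. Requiring train_end <= test_start (optionally with a
    gap between them) ensures no test report predates any training cutoff,
    which is what makes a continuously updated benchmark contamination-
    resistant: newly published reports always land in the test side."""
    assert train_end <= test_start, "train window must not overlap test window"
    train = [r for r in reports if r["published"] < train_end]
    test = [r for r in reports if r["published"] >= test_start]
    return train, test
```

Reports falling in the gap between the two cutoffs are simply dropped, which trades a little data for a cleaner separation.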