Learning, Potential, and Retention: An Approach for Evaluating Adaptive AI-Enabled Medical Devices

arXiv cs.AI / 4/7/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper tackles how to evaluate adaptive AI medical devices when both the model and the evaluation datasets change over time, making it hard to attribute performance differences to the model update versus the shifted data.
  • It proposes three metrics—learning, potential, and retention—to separate gains from model updates, dataset-driven effects, and degradation or preservation of knowledge across modification steps.
  • Case studies with simulated population shifts show that gradual transitions support more stable learning and retention, while rapid shifts surface trade-offs between plasticity and stability.
  • The approach is positioned as a practical framework for regulatory science to assess the safety and effectiveness of sequentially modified adaptive AI systems.

Abstract

This work addresses challenges in evaluating adaptive artificial intelligence (AI) models for medical devices, where iterative updates to both models and evaluation datasets complicate performance assessment. We introduce a novel approach with three complementary measurements: learning (model improvement on current data), potential (dataset-driven performance shifts), and retention (knowledge preservation across modification steps), to disentangle performance changes caused by model adaptations versus dynamic environments. Case studies using simulated population shifts demonstrate the approach's utility: gradual transitions enable stable learning and retention, while rapid shifts reveal trade-offs between plasticity and stability. These measurements provide practical insights for regulatory science, enabling rigorous assessment of the safety and effectiveness of adaptive AI systems over sequential modifications.
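To make the decomposition concrete, here is one plausible way to operationalize the three measurements. The sketch below is illustrative only: the paper's exact formulas are not given in this summary, so the definitions, the `acc[i][j]` matrix convention (accuracy of model version `i` on dataset version `j`), and the toy numbers are all assumptions.

```python
# Hypothetical sketch: decomposing performance change across one model update.
# acc[i][j] = accuracy of model version i evaluated on dataset version j.
# These formulas are illustrative definitions, not the paper's actual ones.

def learning(acc, t):
    """Gain attributable to the model update, measured on the current dataset D_t."""
    return acc[t][t] - acc[t - 1][t]

def potential(acc, t):
    """Dataset-driven shift: how the *old* model's score moves when the data changes."""
    return acc[t - 1][t] - acc[t - 1][t - 1]

def retention(acc, t):
    """Knowledge preservation: the updated model's score change on the *previous* dataset."""
    return acc[t][t - 1] - acc[t - 1][t - 1]

# Toy example: model v0 scores 0.80 on D0; a population shift drops it to 0.72 on D1;
# retraining on D1 recovers performance there (0.84) while nearly holding on D0 (0.79).
acc = [[0.80, 0.72],
       [0.79, 0.84]]
print(round(learning(acc, 1), 2))   # model improvement on current data
print(round(potential(acc, 1), 2))  # dataset-driven shift (negative under a harmful shift)
print(round(retention(acc, 1), 2))  # small negative value = mild forgetting
```

Under this framing, a gradual population shift would keep `potential` small at each step, letting `learning` stay positive without driving `retention` strongly negative, which matches the case-study finding that gradual transitions support stable learning and retention.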