MedQ-Engine: A Closed-Loop Data Engine for Evolving MLLMs in Medical Image Quality Assessment

arXiv cs.CV / 3/23/2026


Key Points

  • MedQ-Engine introduces a closed-loop pipeline that iteratively evaluates MLLMs for Med-IQA, discovers failure prototypes via data-driven clustering, and uses a million-image pool with prototype-based retrieval to guide annotation and fine-tuning.
  • The system uses an entropy-guided routing mechanism to triage annotations, reducing labeling cost while targeting model weaknesses.
  • In experiments across five medical imaging modalities, an 8B-parameter model equipped with MedQ-Engine surpasses GPT-4o by more than 13% and narrows the gap with human experts to just 4.34%, using only 10K annotations with more than 4x sample efficiency over random sampling.
  • The approach addresses both the high cost of descriptive annotation and the inability of one-time data collection to track a model's evolving weaknesses, enabling self-improvement through progressive human-in-the-loop annotation and quality-assured fine-tuning.
  • The paper positions MedQ-Engine as a scalable framework for evolving MLLMs in clinical QA tasks, potentially accelerating deployment of AI in radiology and related fields.
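The entropy-guided routing idea above can be illustrated with a small sketch: samples where the model's predicted quality-score distribution has high entropy (i.e., the model is uncertain) are routed to human annotators, while confident predictions are auto-labeled. The threshold value and function names here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted quality-score distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route_annotations(samples, threshold=1.0):
    """Triage sketch: high-entropy (uncertain) samples go to human annotators;
    low-entropy samples are auto-labeled with the model's own prediction.
    `threshold` is a hypothetical cutoff, not a value from the paper."""
    to_human, to_auto = [], []
    for probs in samples:
        (to_human if entropy(probs) > threshold else to_auto).append(probs)
    return to_human, to_auto
```

For example, a near-uniform distribution over four quality grades (entropy ≈ 1.39 nats) would be sent to a human, while a sharply peaked one (entropy ≈ 0.17 nats) would be auto-labeled, which is how routing can cut labeling cost while still targeting weaknesses.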

Abstract

Medical image quality assessment (Med-IQA) is a prerequisite for clinical AI deployment, yet multimodal large language models (MLLMs) still fall substantially short of human experts, particularly when required to provide descriptive assessments with clinical reasoning beyond simple quality scores. However, improving them is hindered by the high cost of acquiring descriptive annotations and by the inability of one-time data collection to adapt to the model's evolving weaknesses. To address these challenges, we propose MedQ-Engine, a closed-loop data engine that iteratively evaluates the model to discover failure prototypes via data-driven clustering, explores a million-scale image pool using these prototypes as retrieval anchors with progressive human-in-the-loop annotation, and evolves through quality-assured fine-tuning, forming a self-improving cycle. Models are evaluated on complementary perception and description tasks. An entropy-guided routing mechanism triages annotations to minimize labeling cost. Experiments across five medical imaging modalities show that MedQ-Engine elevates an 8B-parameter model to surpass GPT-4o by over 13% and narrow the gap with human experts to only 4.34%, using only 10K annotations with more than 4x sample efficiency over random sampling.
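The discover-and-explore loop described in the abstract can be sketched as two steps: cluster the embeddings of failure cases to obtain prototypes, then use those prototypes as anchors to retrieve similar unlabeled images from the large pool for the next annotation round. The paper does not specify its clustering or retrieval method; this sketch assumes a toy k-means and nearest-neighbor retrieval over generic embeddings.

```python
import numpy as np

def failure_prototypes(fail_embeds, k=2, iters=10, seed=0):
    """Toy k-means over failure-case embeddings; the centroids act as
    failure prototypes (the clustering method is an assumption)."""
    rng = np.random.default_rng(seed)
    centers = fail_embeds[rng.choice(len(fail_embeds), k, replace=False)]
    for _ in range(iters):
        # Assign each failure case to its nearest prototype, then recenter.
        dists = np.linalg.norm(fail_embeds[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = fail_embeds[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def retrieve_candidates(pool_embeds, prototypes, top_n=5):
    """Rank pool images by distance to their nearest failure prototype;
    the closest ones become candidates for the next annotation round."""
    dists = np.linalg.norm(
        pool_embeds[:, None] - prototypes[None], axis=-1
    ).min(axis=1)
    return np.argsort(dists)[:top_n]
```

In this picture, each evaluation round refreshes the prototypes, so retrieval automatically shifts toward whatever the model currently gets wrong, closing the loop before quality-assured fine-tuning.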