CheXmix: Unified Generative Pretraining for Vision Language Models in Medical Imaging

arXiv cs.CV · April 28, 2026


Key Points

  • CheXmix proposes a unified early-fusion generative pretraining approach for vision-language medical imaging models, addressing the limitations of the CLIP+LLM projection layers used in many MLLM pipelines (illustrated in the sketch after this list).
  • The method trains on large chest X-ray datasets paired with radiology reports and extends Chameleon’s autoregressive framework with a two-stage multimodal generative pretraining strategy that blends masked autoencoder strengths with MLLM training.
  • CheXmix is designed to support both discriminative and generative tasks at coarse and fine-grained levels, enabling flexible use across multiple chest X-ray problem types.
  • Reported evaluations show CheXmix outperforms other generative baselines by 6.0% across masking ratios, surpasses CheXagent by 8.6% in AUROC at high image masking ratios on CheXpert classification, improves image inpainting by 51.0% over text-only generative models, and scores 45% higher than CheXagent on the GREEN metric for radiology report generation.
  • The paper provides an open-source codebase at https://github.com/StanfordMIMI/CheXmix, supporting replication and further research.
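
To make the early-fusion idea concrete, here is a minimal, hypothetical sketch of how image and text tokens might be packed into one autoregressive sequence for a single decoder. It assumes a VQ-style image tokenizer and uses stand-in names (`toy_text_tokenize`, `tokenize_image_stub`, `IMG_TOKEN_OFFSET`, `<boi>`/`<eoi>`); none of these come from the CheXmix codebase.

```python
# Minimal sketch of early-fusion sequence construction. All tokenizers here
# are toy stand-ins for a real text tokenizer and a VQ image tokenizer.
import random

TEXT_VOCAB = {"<boi>": 0, "<eoi>": 1, "<bos>": 2, "<eos>": 3}  # special tokens
IMG_VOCAB_SIZE = 8192          # hypothetical codebook size of the image tokenizer
IMG_TOKEN_OFFSET = 10000       # image codes are shifted into a separate ID range

def toy_text_tokenize(report: str) -> list[int]:
    """Stand-in for a real text tokenizer: hash words into a toy ID range."""
    return [4 + (hash(w) % 5000) for w in report.lower().split()]

def tokenize_image_stub(num_patches: int = 256) -> list[int]:
    """Stand-in for a VQ image tokenizer: one discrete code per image patch."""
    return [IMG_TOKEN_OFFSET + random.randrange(IMG_VOCAB_SIZE) for _ in range(num_patches)]

def build_early_fusion_sequence(report: str) -> list[int]:
    """Interleave image and text tokens into one autoregressive sequence,
    so a single decoder models both modalities without a projection layer."""
    img_tokens = tokenize_image_stub()
    txt_tokens = toy_text_tokenize(report)
    return (
        [TEXT_VOCAB["<bos>"], TEXT_VOCAB["<boi>"]]
        + img_tokens
        + [TEXT_VOCAB["<eoi>"]]
        + txt_tokens
        + [TEXT_VOCAB["<eos>"]]
    )

seq = build_early_fusion_sequence("No acute cardiopulmonary abnormality.")
print(len(seq), seq[:6])
```

Because the entire sequence is handled by one decoder, there is no separate projection layer between a frozen vision encoder and the LLM, which is the bottleneck the paper argues against.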

Abstract

Recent medical multimodal foundation models are built as multimodal LLMs (MLLMs) by connecting a CLIP-pretrained vision encoder to an LLM using LLaVA-style finetuning. This two-stage, decoupled approach introduces a projection layer that can distort visual features. This is especially concerning in medical imaging where subtle cues are essential for accurate diagnoses. In contrast, early-fusion generative approaches such as Chameleon eliminate the projection bottleneck by processing image and text tokens within a single unified sequence, enabling joint representation learning that leverages the inductive priors of language models. We present CheXmix, a unified early-fusion generative model trained on a large corpus of chest X-rays paired with radiology reports. We expand on Chameleon's autoregressive framework by introducing a two-stage multimodal generative pretraining strategy that combines the representational strengths of masked autoencoders with MLLMs. The resulting models are highly flexible, supporting both discriminative and generative tasks at both coarse and fine-grained scales. Our approach outperforms well-established generative models across all masking ratios by 6.0% and surpasses CheXagent by 8.6% on AUROC at high image masking ratios on the CheXpert classification task. We further inpaint images over 51.0% better than text-only generative models and outperform CheXagent by 45% on the GREEN metric for radiology report generation. These results demonstrate that CheXmix captures fine-grained information across a broad spectrum of chest X-ray tasks. Our code is at: https://github.com/StanfordMIMI/CheXmix.
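
As a rough illustration of how the masked-autoencoder ingredient of the two-stage pretraining could look inside a unified token sequence, the sketch below masks a random fraction of the image-token span and builds labels that supervise only the masked positions. The masking ratio, the reserved `MASK_ID`, and the span boundaries are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of an MAE-style masking step over the image span of a unified
# sequence: replace a random fraction of image tokens with a <mask> ID and
# supervise only their reconstruction. Names and values are illustrative.
import torch

MASK_ID = 9999  # hypothetical reserved ID for masked image positions

def mask_image_span(tokens: torch.Tensor, img_start: int, img_end: int,
                    mask_ratio: float = 0.75):
    """Mask a random fraction of image-token positions; return corrupted
    inputs plus labels ignored (-100) everywhere except the masked spots."""
    corrupted = tokens.clone()
    labels = torch.full_like(tokens, -100)           # -100 = ignored by cross-entropy
    span = torch.arange(img_start, img_end)
    n_mask = int(mask_ratio * len(span))
    masked_pos = span[torch.randperm(len(span))[:n_mask]]
    labels[masked_pos] = tokens[masked_pos]           # supervise only masked tokens
    corrupted[masked_pos] = MASK_ID
    return corrupted, labels

tokens = torch.randint(10000, 18192, (300,))          # toy unified sequence
inputs, labels = mask_image_span(tokens, img_start=2, img_end=258, mask_ratio=0.75)
print((labels != -100).sum().item(), "positions supervised")
```

Sweeping `mask_ratio` from low to high is also one way to probe robustness to heavy image masking, the regime where the paper reports its 8.6% AUROC gain over CheXagent.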