DOSE: Data Selection for Multi-Modal LLMs via Off-the-Shelf Models

arXiv cs.CV / 4/21/2026


Key Points

  • The paper argues that multimodal (vision-language) training data often suffer from noise, redundancy, and poor image-text alignment, which limits VLM improvements.
  • It introduces DOSE, a method that uses off-the-shelf pretrained models that have never seen the target data to score and select candidate samples without any task-specific training or fine-tuning.
  • DOSE evaluates sample text quality and image-text alignment, then builds a joint quality–alignment distribution and applies adaptive weighted sampling to choose informative data while preserving long-tail diversity.
  • Experiments on VQA and math benchmarks show that models trained on DOSE-filtered data can match or outperform models trained on the full dataset, while improving efficiency and scalability.
  • The work suggests that reusing existing pretrained models for data curation can reduce the extra compute cost typically required by conventional filtering pipelines.

Abstract

High-quality and diverse multimodal data are essential for improving vision-language models (VLMs), yet existing datasets often contain noisy, redundant, and poorly aligned samples. To address these problems, data filtering is commonly used to enhance the efficiency and performance of multimodal learning, but it introduces extra computational cost because filtering models are usually trained on the same data they are meant to screen. To reduce this cost, we study DOSE, which explores whether off-the-shelf pretrained models that have never seen the target data can be used to select training samples for larger and stronger multimodal models without any task-specific training. Even without fine-tuning, these models can effectively assess text quality and image-text alignment to guide data selection. Based on this, we build a joint quality-alignment distribution and apply adaptive weighted sampling to select informative samples while maintaining long-tail diversity. This approach enhances data diversity, enabling models trained on DOSE-filtered data to match or surpass those trained on the full dataset on standard VQA and math benchmarks. Extensive experiments demonstrate its effectiveness, efficiency, and scalability.
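The selection pipeline described above (score each sample, form a joint quality–alignment distribution, then sample adaptively so dense regions do not crowd out the long tail) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scores here are synthetic stand-ins for what frozen off-the-shelf models (e.g., a pretrained LM for text quality, a CLIP-style model for alignment) would produce, and the density-penalized weighting is one plausible instantiation of "adaptive weighted sampling."

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample scores in [0, 1], standing in for outputs of
# frozen off-the-shelf models that never saw the target data:
#   quality[i] ~ text-quality score, align[i] ~ image-text alignment score.
n = 10_000
quality = rng.beta(5, 2, n)
align = rng.beta(4, 3, n)

# Build the joint quality-alignment distribution as a 2-D histogram.
bins = 20
hist, q_edges, a_edges = np.histogram2d(
    quality, align, bins=bins, range=[[0, 1], [0, 1]]
)

# Look up each sample's bin density in the joint histogram.
qi = np.clip(np.digitize(quality, q_edges[1:-1]), 0, bins - 1)
ai = np.clip(np.digitize(align, a_edges[1:-1]), 0, bins - 1)
density = hist[qi, ai]

# Adaptive weighted sampling: favor high joint scores, but down-weight
# densely populated bins so rare (long-tail) score combinations retain
# selection probability. The exact weighting is illustrative.
score = quality * align
weights = score / (density + 1.0)
probs = weights / weights.sum()

# Draw a fixed-budget subset without replacement.
budget = 2_000
selected = rng.choice(n, size=budget, replace=False, p=probs)
print(selected.shape)
```

Under this weighting, two samples with the same quality and alignment scores are not equally likely to be picked: the one in a sparser region of the joint distribution is preferred, which is what preserves diversity while still biasing toward informative data.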