DataProphet: Demystifying Supervision Data Generalization in Multimodal LLMs

arXiv cs.CL / March 23, 2026


Key Points

  • The paper questions whether intuitive similarity between training data and target benchmarks reliably predicts downstream gains in multimodal LLMs and finds it unreliable across 14 vision-language datasets.
  • It introduces DATAPROPHET, a training-free metric that combines multimodal perplexity, dataset similarity, and data diversity to rank supervision data.
  • Across 14 vision-language datasets and 7 tasks, the method shows that generalization depends more on the specific dataset than on broad task labels, and its rankings correlate strongly with actual post-training gains (Kendall's tau = 86.0%).
  • DATAPROPHET-based data selection yields up to 6.9% improvement over uniform selection, 1.4% over a state-of-the-art training-based baseline, and 0.2% above oracle selection based on experimental performance.
  • The authors will release code and data to the public.
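The paper does not publish its exact scoring formula here, but the idea of combining the three training-free signals and validating the resulting ranking with Kendall's tau can be sketched as follows. The `prophet_score` weights, the dataset names, and all numbers are hypothetical placeholders, not values from the paper; only the structure (score candidates, rank them, compare against measured post-training gains) reflects the described approach.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation between two equal-length score lists
    (no tie handling; sufficient for this toy illustration)."""
    n = len(a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def prophet_score(perplexity, similarity, diversity, w=(1.0, 1.0, 1.0)):
    """Hypothetical combination of the three training-free signals:
    lower multimodal perplexity and higher similarity/diversity are
    assumed better. The weights are illustrative, not from the paper."""
    return -w[0] * perplexity + w[1] * similarity + w[2] * diversity

# Toy example: rank four made-up candidate supervision datasets.
# Each tuple is (perplexity, similarity-to-benchmark, diversity).
datasets = {
    "caption_a": (5.2, 0.71, 0.40),
    "vqa_b":     (3.8, 0.55, 0.62),
    "ocr_c":     (6.5, 0.80, 0.35),
    "chart_d":   (4.1, 0.60, 0.58),
}
scores = {name: prophet_score(*feats) for name, feats in datasets.items()}

# Actual post-training gains would come from real experiments; fabricated here.
actual_gains = {"caption_a": 1.2, "vqa_b": 2.5, "ocr_c": 0.4, "chart_d": 2.1}

names = list(datasets)
tau = kendall_tau([scores[n] for n in names], [actual_gains[n] for n in names])
print(f"Kendall's tau between predicted and actual rankings: {tau:.2f}")
```

A high tau here means the training-free ranking agrees with the (expensive) post-training ranking, which is exactly the validation the paper reports at tau = 86.0% across its 14 datasets.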

Abstract

Conventional wisdom for selecting supervision data for multimodal large language models (MLLMs) is to prioritize datasets that appear similar to the target benchmark, such as text-intensive or vision-centric tasks. However, it remains unclear whether such intuitive similarity reliably predicts downstream performance gains. In this work, we take a first step toward answering a practical question: can we estimate the influence of a training dataset on a target benchmark before any training is performed? To investigate this question, we conduct an in-depth analysis of transfer across 14 vision-language datasets spanning 7 diverse tasks. Our results show that intuitive task similarity is an unreliable predictor of transferability, and that generalization depends more on the specific dataset than on its broad task category. Motivated by this finding, we propose DATAPROPHET, a simple and effective training-free metric that combines multimodal perplexity, similarity, and data diversity. Experiments show that DATAPROPHET produces supervision-data rankings that strongly correlate with rankings based on actual post-training performance gains, achieving a Kendall's tau of 86.0%. Moreover, DATAPROPHET enables better supervision-data selection, yielding up to 6.9% improvement over uniform selection, 1.4% over a state-of-the-art training-based baseline, and 0.2% above oracle selection based on experimental performance. Our code and data will be released.