Caption First, VQA Second: Knowledge Density, Not Task Format, Drives Multimodal Scaling

arXiv cs.AI / 4/16/2026


Key Points

  • The paper argues that multimodal model scaling is limited less by the variety of task formats (e.g., VQA) and more by the knowledge density and semantic coverage of the training data.
  • It shows that VQA supervision adds little incremental semantic information beyond what image captions already contain: VQA signals can be reconstructed from captions with negligible performance loss.
  • The authors report that enhancing knowledge density via methods like structured caption enrichment and cross-modal knowledge injection yields consistent gains across multimodal and downstream benchmarks.
  • Across controlled experiments, performance is found to correlate more strongly with semantic coverage than with task diversity, suggesting a data-knowledge bottleneck.
  • The work concludes that existing MLLMs struggle to scale because training data lacks sufficient knowledge coverage and proposes a knowledge-centric approach as a foundation for scalable multimodal training.
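To make the caption-reconstruction claim concrete, here is a minimal toy sketch of deriving VQA-style pairs from a caption with simple templates. The function name, templates, and patterns are all hypothetical illustrations, not the paper's actual method; the point is only that each answer is read directly off the caption, so the QA pairs add no semantic information the caption lacked.

```python
import re

def captions_to_vqa(caption):
    """Derive toy VQA pairs from a caption via simple templates.

    Hypothetical illustration: every answer below is extracted
    verbatim from the caption, so the QA supervision carries no
    information beyond the caption itself.
    """
    pairs = []
    text = caption.lower()
    # Pattern: "<count> <noun> ..." -> a counting question.
    m = re.match(r"(two|three|four|five) (\w+)", text)
    if m:
        pairs.append((f"How many {m.group(2)} are in the image?", m.group(1)))
    # Pattern: "... on a/the <place>" -> a location question.
    m = re.search(r"on (?:a|the) (\w+)", text)
    if m:
        pairs.append(("Where is the subject?", f"on the {m.group(1)}"))
    return pairs

print(captions_to_vqa("Two cats on a sofa"))
# -> [('How many cats are in the image?', 'two'),
#     ('Where is the subject?', 'on the sofa')]
```

A real reconstruction would presumably use an LLM rather than regex templates, but the information-theoretic point is the same: if captions suffice to regenerate the QA pairs, the VQA format itself contributes little new knowledge.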

Abstract

Multimodal large language models (MLLMs) have achieved rapid progress, yet their scaling behavior remains less clearly characterized and often less predictable than that of text-only LLMs. Increasing model size and task diversity often yields diminishing returns. In this work, we argue that the primary bottleneck in multimodal scaling is not task format, but knowledge density in training data. We first show that task-specific supervision such as Visual Question Answering (VQA) contributes little incremental semantic information beyond image captions: VQA signals can be reconstructed from captions with negligible performance loss. We then demonstrate that increasing knowledge density -- through structured caption enrichment and cross-modal knowledge injection -- leads to consistent performance improvements across multimodal and downstream benchmarks. Across controlled experiments, performance correlates more strongly with semantic coverage than with task diversity. These findings suggest that current MLLMs fail to scale primarily because training data lacks sufficient knowledge coverage. We advocate for knowledge-centric multimodal training as a principled foundation for scalable multimodal models.
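The abstract's central variable, semantic coverage, can be illustrated with a crude proxy: the fraction of a fixed concept vocabulary that a caption set mentions at least once. This sketch (function name, vocabulary, and metric are my assumptions, not the paper's definition) shows how a many-captions-few-concepts corpus scores lower than a corpus of the same size with broader coverage:

```python
def semantic_coverage(captions, concept_vocab):
    """Fraction of a concept vocabulary mentioned at least once.

    A deliberately simple proxy for semantic coverage; the paper
    presumably uses a richer measure. Hypothetical illustration.
    """
    mentioned = set()
    for cap in captions:
        mentioned |= set(cap.lower().split()) & concept_vocab
    return len(mentioned) / len(concept_vocab)

vocab = {"cat", "sofa", "dog", "car", "tree", "river", "bicycle", "lamp"}
# Same number of captions, very different knowledge density:
dense  = ["a cat on a sofa", "a dog near a car", "a bicycle by a tree"]
sparse = ["a cat on a sofa", "a cat under a sofa", "a cat beside a sofa"]
print(semantic_coverage(dense, vocab))   # -> 0.75
print(semantic_coverage(sparse, vocab))  # -> 0.25
```

Under the paper's thesis, scaling the `sparse`-style corpus (more task formats over the same concepts) hits the data-knowledge bottleneck, while enriching toward the `dense` style raises the coverage that performance tracks.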