MixAtlas: Uncertainty-aware Data Mixture Optimization for Multimodal LLM Midtraining

arXiv cs.LG · April 17, 2026


Key Points

  • The paper introduces MixAtlas, a new method for uncertainty-aware data-mixture optimization tailored to multimodal LLM midtraining, moving beyond prior single-axis mixture tuning.
  • MixAtlas decomposes the training data along two dimensions—image concept clusters (10 via CLIP embeddings) and task supervision types (5 objectives such as captioning, OCR, grounding, detection, and VQA)—to build inspectable, adaptable training “recipes.”
  • It uses small proxy models (Qwen2-0.5B) with a Gaussian-process surrogate and GP-UCB acquisition to search the mixture space under a proxy budget comparable to regression-based baselines.
  • Experiments on 10 multimodal benchmarks show that optimized mixtures improve average performance by 8.5%-17.6% on Qwen2-7B and by 1.0%-3.3% on Qwen2.5-7B, while reaching baseline-equivalent training loss in up to 2× fewer steps.
  • The discovered recipes transfer from 0.5B proxy settings to 7B-scale midtraining across Qwen model families, indicating practical reuse across model variants and corpora.
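The two-axis decomposition above can be sketched in code. This is a minimal illustration, not the paper's implementation: random unit vectors stand in for real CLIP image embeddings, and the task labels are synthetic; `KMeans` is one plausible choice of clustering algorithm (the paper only says clusters are discovered from CLIP embeddings).

```python
# Sketch of a MixAtlas-style two-axis data decomposition.
# Assumptions: random vectors stand in for CLIP image embeddings,
# and task labels (captioning, OCR, grounding, detection, VQA) are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for CLIP image embeddings of 1000 training images.
embeddings = rng.normal(size=(1000, 512))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit-normalize

# Discover 10 visual-domain concept clusters.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(embeddings)
cluster_ids = kmeans.labels_

# Each example also carries one of 5 task-supervision types.
task_types = rng.integers(0, 5, size=1000)  # synthetic stand-in labels

# Counting (concept cluster, task type) pairs gives the 10 x 5 grid
# over which a mixture "recipe" assigns sampling weights.
mixture_counts = np.zeros((10, 5), dtype=int)
np.add.at(mixture_counts, (cluster_ids, task_types), 1)
mixture = mixture_counts / mixture_counts.sum()  # normalized mixture weights
```

A recipe in this framing is just a reweighting of the 10×5 grid, which is what makes it easy to inspect and port to a new corpus: re-cluster the new images, re-tag the tasks, and reuse the weights.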

Abstract

Domain reweighting can improve sample efficiency and downstream generalization, but data-mixture optimization for multimodal midtraining remains largely unexplored. Current multimodal training recipes tune mixtures along a single dimension, typically data format or task type. We introduce MixAtlas, a method that produces benchmark-targeted data recipes that can be inspected, adapted, and transferred to new corpora. MixAtlas decomposes the training corpus along two axes: image concepts (10 visual-domain clusters discovered via CLIP embeddings) and task supervision (5 objective types including captioning, OCR, grounding, detection, and VQA). Using small proxy models (Qwen2-0.5B) paired with a Gaussian-process surrogate and GP-UCB acquisition, MixAtlas searches the resulting mixture space with the same proxy budget as regression-based baselines but finds better-performing mixtures. We evaluate on 10 benchmarks spanning visual understanding, document reasoning, and multimodal reasoning. On Qwen2-7B, optimized mixtures improve average performance by 8.5%–17.6% over the strongest baseline; on Qwen2.5-7B, gains are 1.0%–3.3%. Both settings reach baseline-equivalent training loss in up to 2× fewer steps. Recipes discovered on 0.5B proxies transfer to 7B-scale training across Qwen model families.