MixAtlas: Uncertainty-aware Data Mixture Optimization for Multimodal LLM Midtraining
arXiv cs.LG / 4/17/2026
Key Points
- The paper introduces MixAtlas, a new method for uncertainty-aware data-mixture optimization tailored to multimodal LLM midtraining, moving beyond prior single-axis mixture tuning.
- MixAtlas decomposes the training data along two dimensions—image concept clusters (10 via CLIP embeddings) and task supervision types (5 objectives such as captioning, OCR, grounding, detection, and VQA)—to build inspectable, adaptable training “recipes.”
- It uses small proxy models (Qwen2-0.5B) with a Gaussian-process surrogate and GP-UCB acquisition to search the mixture space under a proxy budget comparable to regression-based baselines (see the sketch after this list).
- Experiments on 10 multimodal benchmarks show that optimized mixtures improve average performance by 8.5%-17.6% on Qwen2-7B and by 1.0%-3.3% on Qwen2.5-7B, while reaching baseline-equivalent training loss in up to 2× fewer steps.
- The discovered recipes transfer from 0.5B proxy settings to 7B-scale midtraining across Qwen model families, indicating practical reuse across model variants and corpora.
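To make the proxy-based search concrete, here is a minimal sketch of GP-UCB optimization over a 10×5 mixture grid (concept clusters × task supervision types), as the key points describe. This is not the paper's implementation: the scikit-learn surrogate, the Dirichlet candidate sampling, the exploration weight `BETA`, and the placeholder `train_proxy_and_eval` function are all illustrative assumptions standing in for real proxy-model midtraining runs.

```python
# Sketch of GP-UCB search over mixture weights on a 10x5 recipe grid.
# `train_proxy_and_eval` is a hypothetical stand-in for midtraining a small
# proxy model (e.g. a 0.5B LLM) and scoring it on held-out benchmarks.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

N_CLUSTERS, N_TASKS = 10, 5          # the two mixture axes from the paper
DIM = N_CLUSTERS * N_TASKS           # one weight per (cluster, task) cell
rng = np.random.default_rng(0)

def sample_mixtures(n):
    """Draw candidate mixtures uniformly from the probability simplex."""
    return rng.dirichlet(np.ones(DIM), size=n)

def train_proxy_and_eval(weights):
    """Placeholder objective: in practice, midtrain a proxy model on data
    sampled with these weights and return an aggregate benchmark score.
    Replaced here by a synthetic function purely for illustration."""
    target = np.full(DIM, 1.0 / DIM)
    return -np.linalg.norm(weights - target)

# Warm-start the Gaussian-process surrogate with a few random mixtures.
X = sample_mixtures(8)
y = np.array([train_proxy_and_eval(w) for w in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

BETA = 2.0                                 # exploration weight in the UCB rule
for step in range(20):                     # proxy-training budget
    gp.fit(X, y)
    candidates = sample_mixtures(512)
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + BETA * sigma                # GP-UCB acquisition value
    best = candidates[np.argmax(ucb)]      # most promising mixture to try next
    X = np.vstack([X, best])
    y = np.append(y, train_proxy_and_eval(best))

best_recipe = X[np.argmax(y)].reshape(N_CLUSTERS, N_TASKS)
print("Best mixture weight per (cluster, task) cell:\n", best_recipe)
```

The uncertainty term `sigma` is what makes the search "uncertainty-aware": mixtures the surrogate is unsure about get a bonus, so the proxy budget is split between exploiting known-good recipes and probing unexplored regions of the grid.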


