M-MiniGPT4: Multilingual VLLM Alignment via Translated Data

arXiv cs.CL / 4/1/2026


Key Points

  • The paper introduces M-MiniGPT4, a multilingual vision-language LLM designed to provide strong vision-language understanding (VLU) performance across 11 languages, using the MiniGPT4 architecture as a base.
  • It improves multilingual capability by combining native multilingual training data with translated data and adds a dedicated multilingual alignment stage using parallel text corpora.
  • The model reaches 36% accuracy on the multilingual MMMU benchmark, outperforming prior state-of-the-art systems in the same parameter class.
  • The authors open-source the models, code, and translated datasets to support further work on low-resource and multilingual vision-language research.

Abstract

This paper presents a Multilingual Vision Large Language Model, named M-MiniGPT4. Our model exhibits strong vision-language understanding (VLU) capabilities across 11 languages. We utilize a mixture of native multilingual and translated data to push the multilingual VLU performance of the MiniGPT4 architecture. In addition, we propose a multilingual alignment training stage that uses parallel text corpora to further enhance the multilingual capabilities of our model. M-MiniGPT4 achieves 36% accuracy on the multilingual MMMU benchmark, outperforming state-of-the-art models in the same weight class, including foundation models released after the majority of this work was completed. We open-source our models, code, and translated datasets to facilitate future research in low-resource and multilingual settings.