Chitrakshara: A Large Multilingual Multimodal Dataset for Indian languages

arXiv cs.CL / 3/26/2026


Key Points

  • The paper introduces Chitrakshara, a new large multilingual multimodal dataset series aimed at improving Vision-Language Model (VLM) coverage of Indian languages, rather than relying on English-centric training data.
  • It presents two dataset releases: Chitrakshara-IL with 193M images, 30B text tokens, and 50M multilingual documents, and Chitrakshara-Cap with 44M image-text pairs and 733M tokens.
  • The dataset spans 11 Indian languages sourced from Common Crawl; the authors describe a detailed data collection pipeline covering curation, filtering, and processing steps (a sketch of what one such filtering stage might look like follows this list).
  • The work includes a quality and diversity analysis to evaluate how representative and varied the dataset is across Indic languages, supporting the goal of more culturally inclusive VLMs.
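The summary does not spell out how the curation and filtering stages work, so the following is only a minimal sketch of one plausible filtering step: language identification over Common Crawl text using fastText's public lid.176 model. The language whitelist, threshold, and function name are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (NOT the authors' pipeline): filter Common Crawl documents
# by dominant language using fastText's public lid.176 language-ID model.
import fasttext

# Hypothetical target set: the paper covers 11 Indian languages but this
# summary does not enumerate them; these ISO 639-1 codes are assumptions.
INDIC_LANGS = {"hi", "bn", "ta", "te", "ml", "kn", "mr", "gu", "pa", "or", "as"}

model = fasttext.load_model("lid.176.bin")  # public fastText LID model

def keep_document(text: str, threshold: float = 0.65) -> bool:
    """Keep a document if its dominant language is Indic with high confidence."""
    # fastText's predict() rejects embedded newlines, so flatten the text first.
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    lang = labels[0].removeprefix("__label__")
    return lang in INDIC_LANGS and probs[0] >= threshold
```

A real curation pipeline would layer further stages on top of this (deduplication, quality scoring, image-link extraction), but a language-ID gate of this kind is a common first pass when carving a multilingual subset out of Common Crawl.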

Abstract

Multimodal research has predominantly focused on single-image reasoning, with limited exploration of multi-image scenarios. Recent models have sought to enhance multi-image understanding through large-scale pretraining on interleaved image-text datasets. However, most Vision-Language Models (VLMs) are trained primarily on English datasets, leading to inadequate representation of Indian languages. To address this gap, we introduce the Chitrakshara dataset series, covering 11 Indian languages sourced from Common Crawl. It comprises (1) Chitrakshara-IL, a large-scale interleaved pretraining dataset with 193M images, 30B text tokens, and 50M multilingual documents, and (2) Chitrakshara-Cap, which includes 44M image-text pairs with 733M tokens. This paper details the data collection pipeline, including curation, filtering, and processing methodologies. Additionally, we present a comprehensive quality and diversity analysis to assess the dataset's representativeness across Indic languages and its potential for developing more culturally inclusive VLMs.
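The abstract describes Chitrakshara-IL as an interleaved pretraining dataset, i.e., documents in which text spans and images appear in their original page order rather than as isolated caption pairs. Below is a hypothetical record schema conveying that structure; the field names and types are illustrative assumptions, not the released Chitrakshara format.

```python
# Hypothetical schema for an interleaved image-text document, in the spirit
# of interleaved pretraining corpora. Field names are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Segment:
    kind: str                        # "text" or "image"
    text: Optional[str] = None       # populated when kind == "text"
    image_url: Optional[str] = None  # populated when kind == "image"

@dataclass
class InterleavedDoc:
    doc_id: str
    lang: str                        # e.g. "hi" for Hindi
    segments: list[Segment] = field(default_factory=list)

# Example: a document alternating native-language text and images,
# preserving their order as they appeared on the source page.
doc = InterleavedDoc(
    doc_id="cc-0001",
    lang="hi",
    segments=[
        Segment(kind="text", text="..."),
        Segment(kind="image", image_url="https://example.com/img.jpg"),
        Segment(kind="text", text="..."),
    ],
)
```

Preserving interleaving (rather than flattening documents into caption pairs, as in Chitrakshara-Cap) is what lets models pretrained on such data learn multi-image, in-context reasoning.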