Konkani LLM: Multi-Script Instruction Tuning and Evaluation for a Low-Resource Indian Language

arXiv cs.CL / 3/26/2026


Key Points

  • The paper attributes LLMs' underperformance on Konkani to scarce training data compounded by high script diversity across Devanagari, Romi, and Kannada.
  • It introduces “Konkani-Instruct-100k,” a synthetic instruction-tuning dataset generated via Gemini 3, aimed at improving Konkani instruction-following performance (a data-generation sketch follows this list).
  • The authors create “Konkani LLM,” a set of fine-tuned models tailored to regional linguistic nuances, and evaluate them against both open-weight (Llama 3.1, Qwen2.5, Gemma 3) and closed-source proprietary models (a fine-tuning sketch also follows).
  • They develop the “Multi-Script Konkani Benchmark” to enable systematic evaluation across different orthographies rather than a single script.
  • In machine translation experiments, the Konkani LLM shows consistent improvements over base models and is competitive with, and sometimes surpasses, proprietary baselines.
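
The digest does not describe the generation pipeline, but synthetic instruction corpora like Konkani-Instruct-100k are typically produced by prompting a strong LLM for instruction/response pairs per topic and script. Below is a minimal sketch using the google-generativeai client; the model identifier, seed topics, prompt template, and output path are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of synthetic instruction-data generation with the
# google-generativeai client. The model name, seed topics, and output
# file are placeholders, not details from the paper.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-model-name")  # hypothetical identifier

SEED_TOPICS = ["Goan cuisine", "monsoon farming", "local folklore"]  # examples

PROMPT = (
    "Write one instruction in Konkani ({script} script) about {topic}, "
    "then a helpful Konkani answer. Return JSON with keys "
    "'instruction' and 'response'."
)

records = []
for topic in SEED_TOPICS:
    for script in ("Devanagari", "Romi", "Kannada"):
        reply = model.generate_content(PROMPT.format(script=script, topic=topic))
        records.append({"topic": topic, "script": script, "raw": reply.text})

with open("konkani_instruct_sample.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```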

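A fine-tuning recipe is likewise unspecified here; one common way to adapt an open-weight base model on such a dataset is parameter-efficient LoRA tuning with transformers and peft. The sketch below assumes Llama 3.1 (one of the baselines named above) plus hypothetical hyperparameters, field names, and file paths.

```python
# Minimal sketch of LoRA instruction tuning with transformers + peft.
# The base checkpoint, hyperparameters, and dataset path are assumptions;
# the paper's exact recipe is not described in this digest.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "meta-llama/Llama-3.1-8B"  # one of the open-weight baselines named above

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Hypothetical JSONL with 'instruction'/'response' fields per record.
ds = load_dataset("json", data_files="konkani_instruct_100k.jsonl")["train"]

def to_features(ex):
    # Simple prompt template; the paper's actual format is unknown.
    text = f"### Instruction:\n{ex['instruction']}\n### Response:\n{ex['response']}"
    return tokenizer(text, truncation=True, max_length=1024)

ds = ds.map(to_features, remove_columns=ds.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="konkani-llm-lora", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```
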
Abstract

Large Language Models (LLMs) consistently underperform in low-resource linguistic contexts such as Konkani. This performance deficit stems from acute training data scarcity compounded by high script diversity across Devanagari, Romi, and Kannada orthographies. To address this gap, we introduce Konkani-Instruct-100k, a comprehensive synthetic instruction-tuning dataset generated with Gemini 3. We establish rigorous baseline benchmarks by evaluating leading open-weight architectures, including Llama 3.1, Qwen2.5, and Gemma 3, alongside proprietary closed-source models. Our primary contribution is Konkani LLM, a series of fine-tuned models optimized for regional nuances. Furthermore, we are developing the Multi-Script Konkani Benchmark to facilitate cross-script linguistic evaluation. In machine translation, Konkani LLM delivers consistent gains over the corresponding base models and is competitive with, and in several settings surpasses, proprietary baselines.
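
The abstract reports machine-translation gains without naming a metric. For cross-script comparison, a character-level metric such as chrF (as implemented in sacrebleu) is a common choice because it does not depend on word tokenization, which varies across orthographies. A minimal per-script scoring sketch follows; the metric choice and the placeholder hypotheses and references are assumptions, not the paper's actual setup.

```python
# Minimal sketch of per-script MT scoring with chrF (sacrebleu).
# The metric choice and the placeholder sentences are assumptions;
# the paper's actual evaluation protocol is not specified in this digest.
from sacrebleu.metrics import CHRF

# Hypothesis list and reference stream per script (placeholders).
outputs = {
    "Devanagari": (["<Devanagari hypothesis>"], [["<Devanagari reference>"]]),
    "Romi":       (["<Romi hypothesis>"],       [["<Romi reference>"]]),
    "Kannada":    (["<Kannada hypothesis>"],    [["<Kannada reference>"]]),
}

chrf = CHRF()  # default chrF parameters
for script, (hyps, refs) in outputs.items():
    score = chrf.corpus_score(hyps, refs)
    print(f"{script}: chrF = {score.score:.1f}")
```

Scoring each script separately, rather than pooling all test sentences, keeps the comparison aligned with the benchmark's stated goal of evaluating across orthographies rather than a single script.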