Benchmarking Linguistic Adaptation in Comparable-Sized LLMs: A Study of Llama-3.1-8B, Mistral-7B-v0.1, and Qwen3-8B on Romanized Nepali
arXiv cs.CL / 4/17/2026
Key Points
- The study benchmarks linguistic adaptation for Romanized Nepali using three similar open-weight LLMs—Llama-3.1-8B, Mistral-7B-v0.1, and Qwen3-8B—under both zero-shot and fine-tuned (SFT) conditions.
- Across multiple evaluation metrics (PPL, BERTScore, chrF++, ROUGE-1/2/L, and BLEU), all models fail to generate usable Romanized Nepali in the zero-shot setting, each exhibiting a distinct, architecture-specific failure pattern (metric computations are sketched after this list).
- After QLoRA fine-tuning with rsLoRA (r=32), training only ~1% of parameters in under 27 GPU-hours, all models improve substantially and converge toward BERTScore ≈ 0.75 and chrF++ > 23 (an adapter-configuration sketch follows this list).
- Dimension-wise results favor Qwen3-8B overall: it is the only model that produces semantically relevant Romanized Nepali output zero-shot, and it leads the structural-alignment metrics after SFT.
- The authors confirm an “adaptation headroom” effect: Llama-3.1-8B, though weakest in zero-shot, delivers the largest absolute fine-tuning gains in PPL (Δ=-49.77) and BERTScore (Δ=+0.3287), making it attractive for iterative low-resource development.
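
To make the adapter setup concrete, here is a minimal sketch of a QLoRA + rsLoRA configuration. Only the rank (r=32), the use of rsLoRA, and the ~1% trainable-parameter budget come from the study; the base model shown, the alpha value, the target modules, and the dropout rate are illustrative assumptions.

```python
# Minimal QLoRA + rsLoRA sketch. r=32 and use_rslora come from the paper;
# everything else (base model, alpha, target modules, dropout) is assumed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(      # 4-bit NF4 quantization: the "Q" in QLoRA
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", quantization_config=bnb_config
)

lora_config = LoraConfig(
    r=32,                             # rank reported in the paper
    lora_alpha=64,                    # assumption; the paper's alpha is not quoted here
    use_rslora=True,                  # rank-stabilized scaling: alpha / sqrt(r)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed target set
    lora_dropout=0.05,                # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()    # should report roughly ~1% trainable
```

rsLoRA replaces the standard LoRA scaling factor alpha/r with alpha/sqrt(r), which keeps gradient magnitudes stable at higher ranks such as r=32.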

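The surface-level metrics named above can be computed with standard libraries, as in the hedged sketch below. The paper's exact evaluation corpora, tokenization, and BERTScore checkpoint are not specified here, so the inputs and the multilingual BERT model are placeholders; PPL (not shown) would come from exponentiating the causal LM's mean token loss.

```python
# Sketch of the surface-metric evaluation; inputs are placeholders and the
# BERTScore checkpoint (multilingual BERT) is an assumption, not the paper's.
import sacrebleu
from rouge_score import rouge_scorer
from bert_score import score as bert_score

hyps = ["ma bholi ghar jaanchhu"]   # model outputs (Romanized Nepali), placeholder
refs = ["ma bholi ghar janchu"]     # gold references, placeholder

# chrF++ is chrF extended with word n-grams up to order 2 (word_order=2).
chrfpp = sacrebleu.corpus_chrf(hyps, [refs], word_order=2)
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"chrF++ = {chrfpp.score:.2f}  BLEU = {bleu.score:.2f}")

# ROUGE-1/2/L scores for a single hypothesis-reference pair.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
print(scorer.score(refs[0], hyps[0]))

# BERTScore F1; a multilingual encoder is assumed for Romanized Nepali.
P, R, F1 = bert_score(hyps, refs, model_type="bert-base-multilingual-cased")
print(f"BERTScore F1 = {F1.mean().item():.4f}")
```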

