Bringing Up a Bilingual BabyLM: Investigating Multilingual Language Acquisition Using Small-Scale Models

arXiv cs.CL / 4/1/2026

Key Points

  • The paper investigates how children might acquire two languages simultaneously by using language-model training as a controlled proxy for multilingual exposure conditions.
  • Researchers generated matched 100M-word monolingual and bilingual datasets via synthetic data plus machine translation, reducing confounds common in correlational child studies.
  • GPT-2 models trained under different bilingual exposure regimes are evaluated on perplexity, grammaticality, and semantic knowledge across model scales (a sketch of the regime construction follows this list).
  • Results show that bilingual models match monolingual performance in one of their languages while also achieving strong performance in the second.
  • The authors conclude that bilingual input poses no major in-principle disadvantage for an agnostic statistical learner, and that differences in exposure regime do not strongly change outcomes.
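
The exposure regimes amount to different ways of ordering and mixing two matched corpora into a single training stream. The Python sketch below is a minimal illustration under assumed regime names ("sequential", "interleaved", "mixed") and placeholder document lists; it is not the paper's actual pipeline.

```python
import random

def build_bilingual_stream(docs_a, docs_b, regime, seed=0):
    """Order two matched document lists into a single training stream.

    Hypothetical regimes (illustrative, not the paper's exact setup):
      - "sequential":  all of language A, then all of language B
      - "interleaved": alternate documents between the two languages
      - "mixed":       shuffle all documents together
    """
    if regime == "sequential":
        return docs_a + docs_b
    if regime == "interleaved":
        stream = []
        for a, b in zip(docs_a, docs_b):
            stream.extend([a, b])
        return stream
    if regime == "mixed":
        stream = docs_a + docs_b
        random.Random(seed).shuffle(stream)
        return stream
    raise ValueError(f"unknown regime: {regime!r}")
```

Because the datasets are matched in size, varying only the mixing schedule isolates the effect of input organization from the effect of input quantity.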

Abstract

Multilingualism is incredibly common around the world, leading to many important theoretical and practical questions about how children learn multiple languages at once. For example, does multilingual acquisition lead to delays in learning? Are there better and worse ways to structure multilingual input? Many correlational studies address these questions, but it is surprisingly difficult to get definitive answers because children cannot be randomly assigned to be multilingual and data are typically not matched between languages. We use language model training as a method for simulating a variety of highly controlled exposure conditions, and create matched 100M-word mono- and bilingual datasets using synthetic data and machine translation. We train GPT-2 models on monolingual and bilingual data organized to reflect a range of exposure regimes, and evaluate their performance on perplexity, grammaticality, and semantic knowledge. Across model scales and measures, bilingual models perform similarly to monolingual models in one language, but show strong performance in the second language as well. These results suggest that there are no strong differences between different bilingual exposure regimes, and that bilingual input poses no in-principle challenges for agnostic statistical learners.
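
For the perplexity measure, per-language evaluation of a trained GPT-2 checkpoint is straightforward with the Hugging Face transformers API. The sketch below is a minimal, assumed setup: the checkpoint path and held-out text lists are placeholders, not artifacts released by the authors.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def mean_perplexity(checkpoint, texts, device="cpu"):
    """Average per-text perplexity of a causal LM over held-out texts."""
    tokenizer = GPT2TokenizerFast.from_pretrained(checkpoint)
    model = GPT2LMHeadModel.from_pretrained(checkpoint).to(device).eval()
    ppls = []
    with torch.no_grad():
        for text in texts:
            ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
            # Passing labels=ids makes the model return the mean
            # cross-entropy loss over the predicted tokens.
            loss = model(ids, labels=ids).loss
            ppls.append(math.exp(loss.item()))
    return sum(ppls) / len(ppls)

# Hypothetical usage: score the bilingual model separately per language.
# ppl_a = mean_perplexity("checkpoints/bilingual-gpt2", heldout_lang_a)
# ppl_b = mean_perplexity("checkpoints/bilingual-gpt2", heldout_lang_b)
```

Comparing these per-language scores against the corresponding monolingual models is what underlies the claim that bilingual exposure imposes little cost in either language.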