LGSE: Lexically Grounded Subword Embedding Initialization for Low-Resource Language Adaptation

arXiv cs.CL / 3/25/2026


Key Points

  • The paper introduces LGSE, a framework for lexically (morphologically) grounded initialization of embeddings to better adapt pretrained language models to low-resource, morphologically rich languages like Amharic and Tigrinya.
  • Unlike typical vocabulary expansion approaches that use arbitrarily segmented subwords and can fragment lexical representations, LGSE builds embeddings by decomposing words into morphemes and averaging pretrained subword/FastText-based morpheme representations.
  • If meaningful morpheme segmentation is unavailable, LGSE falls back to character n-gram representations to capture structural information for unseen or difficult tokens.
  • During language-adaptive pretraining, LGSE adds a regularization term that discourages large deviations from the initialized embeddings, helping preserve alignment with the original pretrained embedding space while still allowing adaptation.
  • Experiments on QA, NER, and text classification show LGSE outperforms baseline vocabulary expansion and adaptation methods across tasks, and the authors provide project resources via GitHub.
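The initialization described in the points above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, lookup-table structure, and zero-vector fallback for fully unseen tokens are assumptions for the sake of the example.

```python
import numpy as np

def lgse_init(word, morphemes, morph_vecs, ngram_vecs, n=3, dim=4):
    """Hypothetical sketch of LGSE-style embedding initialization.

    morphemes:  list of morphemes for `word` (empty if segmentation failed)
    morph_vecs: lookup table of pretrained subword/FastText morpheme vectors
    ngram_vecs: lookup table of character n-gram vectors (fallback)
    """
    if morphemes and all(m in morph_vecs for m in morphemes):
        # Preferred path: average the pretrained morpheme vectors.
        return np.mean([morph_vecs[m] for m in morphemes], axis=0)
    # Fallback: average character n-gram vectors to capture structure.
    grams = [word[i:i + n] for i in range(len(word) - n + 1)] or [word]
    vecs = [ngram_vecs[g] for g in grams if g in ngram_vecs]
    if vecs:
        return np.mean(vecs, axis=0)
    # No coverage at all: neutral initialization (illustrative choice).
    return np.zeros(dim)
```

A token whose morphemes are all covered gets the mean of their pretrained vectors; otherwise the character n-grams of the surface form stand in for the missing morphological analysis.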

Abstract

Adapting pretrained language models to low-resource, morphologically rich languages remains a significant challenge. Existing vocabulary expansion methods typically rely on arbitrarily segmented subword units, resulting in fragmented lexical representations and loss of critical morphological information. To address this limitation, we propose the Lexically Grounded Subword Embedding Initialization (LGSE) framework, which introduces morphologically informed segmentation for initializing embeddings of novel tokens. Instead of using random vectors or arbitrary subwords, LGSE decomposes words into their constituent morphemes and constructs semantically coherent embeddings by averaging pretrained subword or FastText-based morpheme representations. When a token cannot be segmented into meaningful morphemes, its embedding is constructed from character n-gram representations to capture structural information. During Language-Adaptive Pretraining, we apply a regularization term that penalizes large deviations of newly introduced embeddings from their initialized values, preserving alignment with the original pretrained embedding space while enabling adaptation to the target language. To isolate the effect of initialization, we retain the original pretrained model vocabulary and tokenizer and update only the new embeddings during adaptation. We evaluate LGSE on three NLP tasks: Question Answering, Named Entity Recognition, and Text Classification, in two morphologically rich, low-resource languages, Amharic and Tigrinya, where morphological segmentation resources are available. Experimental results show that LGSE consistently outperforms baseline methods across all tasks, demonstrating the effectiveness of morphologically grounded embedding initialization for improving representation quality in underrepresented languages. Project resources are available via the GitHub link.
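The anchoring regularizer described in the abstract can be written as an L2 penalty on how far the new embeddings drift from their initialized values. The exact loss form and the name `lam` for the penalty weight are assumptions here; the paper only states that large deviations are penalized.

```python
import numpy as np

def anchored_loss(task_loss, E_new, E_init, lam=0.1):
    """Assumed form of the LGSE adaptation objective:
    the pretraining loss plus an L2 anchor pulling the newly
    introduced embeddings E_new toward their initial values E_init.
    """
    penalty = np.sum((E_new - E_init) ** 2)
    return task_loss + lam * penalty
```

With `lam = 0`, adaptation is unconstrained; larger values keep the new embeddings closer to their morphologically grounded starting points, preserving alignment with the original embedding space.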