A Multi-head-based architecture for effective morphological tagging in Russian with open dictionary

arXiv cs.CL / 4/6/2026


Key Points

  • The paper introduces a new multi-head-attention architecture for morphological tagging in Russian, focusing on accurate prediction of grammatical categories.
  • It preprocesses words by splitting them into subtokens and then learns a procedure to aggregate subtoken vectors back into token-level representations, enabling the use of an open dictionary.
  • The approach supports analyzing morphological patterns from parts of words (e.g., prefixes and endings) and is designed to handle words not seen in the training dataset.
  • Experiments on the SynTagRus and Taiga datasets report very high accuracy (98–99% for some grammatical categories), outperforming previously known results.
  • The model is positioned as practical to train on consumer GPUs, avoids RNNs and large-scale unlabeled-text pretraining (unlike BERT-style workflows), and claims improved processing speed over prior work.
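The subtoken pipeline described above (split a word into pieces, then aggregate the piece vectors back into one token vector) can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the greedy longest-match splitter, the vector dimensions, and the single-query attention pooling in `attention_pool` are all illustrative assumptions.

```python
import numpy as np

def split_subtokens(word, vocab):
    # Greedy longest-match split against a subtoken vocabulary
    # (hypothetical splitter; the paper does not specify its rule).
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to a single-character piece,
            # which is what keeps the dictionary "open".
            pieces.append(word[i])
            i += 1
    return pieces

def attention_pool(subtoken_vecs, W, q):
    # Aggregate subtoken vectors (n, d) into one token vector (d,)
    # with a learned attention weighting (simplified to one head).
    keys = subtoken_vecs @ W          # (n, d) projected keys
    scores = keys @ q                 # (n,) relevance of each subtoken
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over subtokens
    return weights @ subtoken_vecs   # weighted sum -> token vector
```

Because prefixes and endings survive as separate subtokens, the pooled vector can reflect morphologically informative parts of the word even for words never seen during training.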

Abstract

The article proposes a new architecture based on multi-head attention for morphological tagging of Russian. Word preprocessing splits words into subtokens, followed by a trained procedure that aggregates the subtoken vectors into token vectors. This makes it possible to support an open dictionary and to analyze morphological features based on parts of words (prefixes, endings, etc.). The open dictionary allows the analysis of words that are absent from the training dataset. A computational experiment on the SynTagRus and Taiga datasets shows that for some grammatical categories the proposed architecture achieves accuracy of 98-99% and above, outperforming previously known results. For nine out of ten words, the architecture correctly predicts all grammatical categories and indicates when a category does not apply to the word. At the same time, a model based on the proposed architecture can be trained on consumer-level graphics accelerators, retains all the advantages of multi-head attention over RNNs (no RNNs are used in the proposed approach), does not require pretraining on large collections of unlabeled texts (unlike BERT), and shows higher processing speed than previous results.
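The abstract's point about predicting all grammatical categories, including marking categories that should not be analyzed for a word, suggests one classifier head per category with an explicit "N/A" label. The sketch below illustrates that output layout; the category inventory, label sets, and plain linear heads are assumptions for illustration, not the paper's actual heads.

```python
import numpy as np

# Hypothetical category inventory; each head can also emit "N/A"
# when the category does not apply to the word (e.g. Case for a verb).
CATEGORIES = {
    "Case":   ["Nom", "Gen", "Dat", "Acc", "Ins", "Loc", "N/A"],
    "Number": ["Sing", "Plur", "N/A"],
    "Gender": ["Masc", "Fem", "Neut", "N/A"],
}

def predict_categories(token_vec, heads):
    # One linear classifier per grammatical category: heads[cat] is a
    # (num_labels, d) weight matrix applied to the shared token vector.
    out = {}
    for cat, labels in CATEGORIES.items():
        logits = heads[cat] @ token_vec
        out[cat] = labels[int(np.argmax(logits))]
    return out
```

Scoring "all categories correct" per word, as the abstract does ("nine out of ten words"), then means every head, including its N/A decisions, must agree with the gold annotation.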