Abjad-Kids: An Arabic Speech Classification Dataset for Primary Education

arXiv cs.CL, March 24, 2026


Key Points

  • The paper introduces Abjad-Kids, a new Arabic speech dataset planned for public release, aimed at kindergarten and primary education and targeting the learning of alphabets, numbers, and colors for children aged 3–12.
  • The dataset includes 46,397 controlled-recording audio samples spanning 141 classes, with standardized duration, sampling rate, and format to improve research consistency.
  • To handle high similarity among Arabic phonemes and limited per-class data, the authors propose a hierarchical CNN-LSTM audio classification approach that uses two-stage grouping plus specialized classifiers.
  • Experiments show that static linguistic-based grouping performs better than dynamic clustering-based grouping, and that CNN-LSTM models with data augmentation outperform traditional baselines and other deep learning approaches.
  • The study reports persistent overfitting challenges despite augmentation and regularization, indicating that future expansion of the dataset is needed.

Abstract

Speech-based AI educational applications have gained significant interest in recent years, particularly for children. However, research on children's speech remains limited due to the lack of publicly available datasets, especially for low-resource languages such as Arabic. This paper presents Abjad-Kids, an Arabic speech dataset designed for kindergarten and primary education, focusing on fundamental learning of alphabets, numbers, and colors. The dataset consists of 46,397 audio samples collected from children aged 3–12 years, covering 141 classes. All samples were recorded under controlled specifications to ensure consistency in duration, sampling rate, and format. To address the high acoustic similarity among Arabic phonemes and the limited number of samples per class, we propose a hierarchical audio classification approach based on CNN-LSTM architectures. Our methodology decomposes alphabet recognition into a two-stage process: an initial grouping classification model followed by specialized classifiers for each group. Two grouping strategies were evaluated: static linguistic-based grouping and dynamic clustering-based grouping. Experimental results demonstrate that static linguistic-based grouping achieves superior performance. Comparisons between traditional machine learning and deep learning approaches highlight the effectiveness of CNN-LSTM models combined with data augmentation. Despite achieving promising results, most of our experiments indicate a challenge with overfitting, likely due to the limited number of samples even after data augmentation and model regularization; future work may therefore focus on collecting additional data. Abjad-Kids will be publicly available. We hope that Abjad-Kids enriches children's representation in speech datasets and serves as a useful resource for future research in Arabic speech classification for kids.
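The two-stage pipeline described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: the group names, letter labels, and linear scorers below are placeholder assumptions (a real system would use trained CNN-LSTM networks for both stages and a linguistically motivated grouping of all Arabic letters).

```python
import numpy as np

# Hypothetical static grouping of a few Arabic letters (illustration only;
# the paper's actual linguistic grouping covers the full alphabet).
GROUPS = {
    "long_vowels": ["alif", "waw", "ya"],
    "dentals": ["ta", "tha", "dal"],
}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stage1_group(features, w1):
    """Stand-in for the first-stage group classifier (CNN-LSTM in the paper)."""
    probs = softmax(w1 @ features)
    names = list(GROUPS)
    return names[int(np.argmax(probs))]

def stage2_label(features, group, w2):
    """Stand-in for the specialized per-group classifier."""
    probs = softmax(w2[group] @ features)
    return GROUPS[group][int(np.argmax(probs))]

def hierarchical_classify(features, w1, w2):
    """Route the utterance to a group, then classify within that group."""
    group = stage1_group(features, w1)
    return group, stage2_label(features, group, w2)

# Fixed toy weights so the example is deterministic.
dim = 4
w1 = np.array([[1.0, 0.0, 0.0, 0.0],      # scorer for "long_vowels"
               [0.0, 1.0, 0.0, 0.0]])     # scorer for "dentals"
w2 = {g: np.eye(3, dim) for g in GROUPS}  # 3 letter scorers per group

x = np.array([0.1, 2.0, 0.3, 0.0])        # toy stand-in for an audio embedding
print(hierarchical_classify(x, w1, w2))   # → ('dentals', 'tha')
```

The point of the decomposition is that each second-stage classifier only has to discriminate among a handful of acoustically similar letters, rather than all 141 classes at once.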