Hybrid CNN-Transformer Architecture for Arabic Speech Emotion Recognition

arXiv cs.CL / 4/10/2026


Key Points

  • The paper addresses the limited research and data availability for Arabic speech emotion recognition by introducing a dedicated Arabic SER approach.
  • It proposes a hybrid CNN-Transformer model that uses convolutional layers on Mel-spectrograms for discriminative feature extraction and Transformer encoders to model long-range temporal dependencies.
  • Experiments on the EYASE (Egyptian Arabic speech emotion) corpus report strong performance (97.8% accuracy) with a macro F1-score of 0.98.
  • The authors conclude that CNN+attention-based modeling is effective for Arabic SER and suggest Transformer-based methods can be promising even in low-resource language settings.
  • By targeting Arabic and reporting strong results, the work points toward building more human-centered, emotion-aware applications in underrepresented languages.

Abstract

Recognizing emotions from speech using machine learning has become an active research area due to its importance in building human-centered applications. However, while many studies have been conducted in English, German, and other European and Asian languages, research in Arabic remains scarce because of the limited availability of annotated datasets. In this paper, we present an Arabic Speech Emotion Recognition (SER) system based on a hybrid CNN-Transformer architecture. The model leverages convolutional layers to extract discriminative spectral features from Mel-spectrogram inputs and Transformer encoders to capture long-range temporal dependencies in speech. Experiments were conducted on the EYASE (Egyptian Arabic speech emotion) corpus, and the proposed model achieved 97.8% accuracy and a macro F1-score of 0.98. These results demonstrate the effectiveness of combining convolutional feature extraction with attention-based modeling for Arabic SER and highlight the potential of Transformer-based approaches in low-resource languages.
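The architecture described above, convolutional feature extraction over the Mel-spectrogram followed by Transformer encoders over the resulting time-frame tokens, can be sketched in PyTorch. This is a minimal illustration under assumed hyperparameters (layer counts, channel widths, `d_model`, and a four-class label set), not the authors' exact model:

```python
import torch
import torch.nn as nn

class CNNTransformerSER(nn.Module):
    """Hypothetical sketch of a hybrid CNN-Transformer for speech emotion
    recognition. All sizes are illustrative assumptions, not the paper's
    reported configuration."""
    def __init__(self, n_mels=64, n_classes=4, d_model=128):
        super().__init__()
        # CNN front-end: local spectral features from the Mel-spectrogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                     # halve freq and time axes
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Flatten the frequency axis into each time step's token
        self.proj = nn.Linear(64 * (n_mels // 4), d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        # Transformer encoder: long-range temporal dependencies across frames
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, 1, n_mels, time)
        f = self.cnn(x)                          # (batch, 64, n_mels//4, time//4)
        b, c, m, t = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, t, c * m)  # tokens along time
        h = self.encoder(self.proj(f))           # (batch, time//4, d_model)
        return self.head(h.mean(dim=1))          # pool over time -> class logits

model = CNNTransformerSER()
logits = model(torch.randn(2, 1, 64, 200))       # 2 clips, 64 mel bins, 200 frames
print(logits.shape)                              # torch.Size([2, 4])
```

Mean-pooling over time before the classifier is one common choice for utterance-level prediction; a class token or attention pooling would serve the same role.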