Automatic Reflection Level Classification in Hungarian Student Essays

arXiv cs.CL / 5/5/2026


Key Points

  • The paper presents the first comprehensive study on automatically classifying reflection levels in Hungarian student essays using expert-annotated data.
  • A dataset of 1,954 reflective essays labeled on a four-level reflection scale is introduced and used to compare two modeling approaches: classical ML with TF-IDF/embeddings and fine-tuned Hungarian transformer models.
  • To handle strong class imbalance, the study systematically evaluates class weighting, oversampling, data augmentation, and alternative loss functions, supported by an extensive ablation analysis.
  • Results show that shallow classical models with feature engineering reach an overall score of up to 71% (averaged over accuracy, F1-score, and ROC AUC), while transformers reach 68% on the same average but generalize better on minority classes.
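The classical baseline described above can be sketched as a TF-IDF pipeline with a shallow, class-weighted classifier. This is a minimal illustration with made-up toy sentences, not the paper's dataset or exact configuration; `class_weight="balanced"` stands in for one of the imbalance strategies the study compares.

```python
# Hypothetical sketch of the classical approach: TF-IDF features plus a
# shallow classifier, with class weighting against label imbalance.
# Toy data only -- NOT the paper's 1,954-essay corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Four reflection levels, 0 (least reflective) to 3 (most reflective);
# the paper's exact label names are not given in this summary.
texts = [
    "Ma megfigyeltem az orat.",                              # level 0
    "A feladat nehez volt, mert nem volt eleg ido.",         # level 1
    "Ugy ertem, hogy maskepp kellett volna tanitanom.",      # level 2
    "Atgondolva a tanulsagokat, megvaltoztatom a modszerem.",# level 3
] * 10
labels = [0, 1, 2, 3] * 10

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    # class_weight="balanced" reweights rare classes inversely to
    # their frequency -- one simple imbalance-handling option
    ("lr", LogisticRegression(class_weight="balanced", max_iter=1000)),
])
clf.fit(texts, labels)
print(clf.predict(["Ma megfigyeltem az orat."])[0])
```

In the real setting the study also compares semantic embedding features against TF-IDF; the pipeline structure stays the same, only the vectorizer changes.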

Abstract

Reflective thinking is a key competency in education, but assessing reflective writing remains a time-consuming and subjective task for education experts. While automated reflective analysis has been explored in several languages, Hungarian has received little attention. In this paper, we present the first comprehensive study on automatic reflection level classification in Hungarian student essays. We used a large, expert-annotated Hungarian dataset consisting of 1,954 reflective essays collected over multiple academic years and labeled on a four-level reflection scale. We investigate two approaches: (1) classical machine learning models using TF-IDF and semantic embedding features, and (2) Hungarian-specific transformer models fine-tuned for document-level reflection classification. To address the strong class imbalance in the dataset, we systematically examine class weighting, oversampling, data augmentation, and alternative loss functions. An extensive ablation study is conducted to analyze the contribution of each modeling and balancing strategy. Our results show that shallow machine learning models with appropriate feature engineering achieve strong overall performance, reaching an overall score of up to 71% averaged over accuracy, F1-score, and ROC AUC, while transformer-based models achieve a slightly lower overall score (68%) averaged over the same metrics but demonstrate better generalization on minority reflection classes. These findings highlight the continued relevance of classical methods for low-resource settings and the robustness of transformer models for imbalanced classification. The proposed dataset and experimental insights provide a solid foundation for future research on automated reflective analysis in Hungarian and other morphologically rich languages.
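The abstract mentions "alternative loss functions" for imbalance without naming them. One standard example in imbalanced classification is the focal loss, which down-weights easy, well-classified examples so that rare classes contribute more to the gradient; whether the paper uses this exact loss is not stated here, so the sketch below is illustrative only.

```python
# Illustrative multi-class focal loss (Lin et al., 2017) in plain NumPy.
# An example of the *kind* of alternative loss used for imbalance; the
# paper's actual choice of loss is not specified in this summary.
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=None):
    """probs: (N, C) predicted class probabilities.
    targets: (N,) integer labels.
    gamma: focusing parameter; gamma=0 recovers cross-entropy.
    alpha: optional (C,) per-class weights for extra rebalancing."""
    n = probs.shape[0]
    p_t = probs[np.arange(n), targets]   # probability of the true class
    w = (1.0 - p_t) ** gamma             # focusing term: small when easy
    if alpha is not None:
        w = w * alpha[targets]
    return float(np.mean(-w * np.log(p_t)))

# A confidently correct example is down-weighted relative to plain CE:
probs = np.array([[0.9, 0.05, 0.03, 0.02],
                  [0.3, 0.40, 0.20, 0.10]])
targets = np.array([0, 0])
print(focal_loss(probs, targets))            # focal (gamma=2)
print(focal_loss(probs, targets, gamma=0.0)) # plain cross-entropy
```

Class weighting (via `alpha`) and the focusing term address imbalance in complementary ways, which is presumably why the study evaluates loss choices alongside oversampling and augmentation.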