Exploring Data Augmentation and Resampling Strategies for Transformer-Based Models to Address Class Imbalance in AI Scoring of Scientific Explanations in NGSS Classroom

arXiv cs.LG / 4/23/2026


Key Points

  • The paper studies how data augmentation can improve transformer-based (SciBERT) text classification for automated rubric scoring of NGSS-aligned scientific explanations, where class imbalance is especially severe for advanced-reasoning categories.
  • Using a dataset of 1,466 high-school responses labeled across 11 binary-coded analytic rubric categories, the authors compare multiple augmentation methods against fine-tuning alone and a traditional oversampling baseline (SMOTE).
  • GPT-4–generated synthetic responses boost both precision and recall, ALP achieves perfect precision/recall/F1 for the most severely imbalanced categories, and EASE improves alignment with human scoring across both correct scientific ideas and inaccurate ideas.
  • Overall, the results suggest that targeted augmentation can mitigate severe imbalance without overfitting, while preserving the conceptual coverage needed for learning-progression-aligned automated scoring at scale in science education.
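Because each of the 11 rubric categories is binary-coded, scoring quality is naturally reported as per-category precision, recall, and F1 (the metrics the paper uses to compare augmentation strategies). A minimal sketch of that evaluation loop with scikit-learn, using a synthetic toy label matrix rather than the paper's actual data:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)

# Toy stand-in for human codes and model predictions:
# rows = student responses, columns = the 11 binary rubric categories.
y_true = rng.integers(0, 2, size=(200, 11))
y_pred = y_true.copy()
flip = rng.random(y_true.shape) < 0.1      # simulate ~10% model/human disagreement
y_pred[flip] = 1 - y_pred[flip]

# Evaluate each rubric category as an independent binary task.
for cat in range(11):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true[:, cat], y_pred[:, cat], average="binary", zero_division=0
    )
    print(f"Category {cat + 1}: P={p:.2f} R={r:.2f} F1={f1:.2f}")
```

Treating the categories independently is what makes rare categories (e.g., advanced-reasoning codes with few positive examples) so sensitive to imbalance: a handful of positives dominates that category's recall.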

Abstract

Automated scoring of students' scientific explanations offers the potential for immediate, accurate feedback, yet class imbalance in rubric categories, particularly those capturing advanced reasoning, remains a challenge. This study investigates augmentation strategies to improve transformer-based text classification of student responses to a physical science assessment based on an NGSS-aligned learning progression. The dataset consists of 1,466 high school responses scored on 11 binary-coded analytic rubric categories: six important components, including the scientific ideas needed for a complete explanation, and five common incomplete or inaccurate ideas. Using SciBERT as a baseline, we applied fine-tuning and tested three augmentation strategies: (1) GPT-4-generated synthetic responses, (2) EASE, a word-level extraction and filtering approach, and (3) ALP (Augmentation using Lexicalized Probabilistic context-free grammars), a phrase-level extraction approach. While fine-tuning SciBERT improved recall over the baseline, augmentation substantially enhanced performance: GPT-4 data boosted both precision and recall, and ALP achieved perfect precision, recall, and F1 scores on the most severely imbalanced categories (5, 6, 7, and 9). Across all rubric categories, EASE augmentation substantially increased alignment with human scoring for both scientific ideas (Categories 1–6) and inaccurate ideas (Categories 7–11). We also compared these augmentation strategies with a traditional oversampling method (SMOTE), aiming to avoid overfitting and to retain the novice-level data critical for learning-progression alignment. The findings demonstrate that targeted augmentation can address severe imbalance while preserving conceptual coverage, offering a scalable solution for automated, learning-progression-aligned scoring in science education.
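The SMOTE baseline mentioned above synthesizes minority-class examples by interpolating between a minority point and one of its minority-class nearest neighbors in feature space. A minimal from-scratch sketch (NumPy only; in practice one would use `imblearn.over_sampling.SMOTE`) also makes clear why this is awkward for text data: the synthetic points are feature vectors, not readable student responses, which is part of the motivation for the text-level augmentation strategies compared in the paper.

```python
import numpy as np

def smote_sample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by SMOTE-style
    interpolation: pick a minority point, pick one of its k nearest
    minority neighbors, and sample a point on the segment between them."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Euclidean distances from X_min[i] to every minority point
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(neighbors)
        lam = rng.random()                          # interpolation weight in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class: 10 "embedding" vectors in 4 dimensions
X_min = np.random.default_rng(1).normal(size=(10, 4))
X_new = smote_sample(X_min, n_new=20)
print(X_new.shape)  # (20, 4)
```

Each synthetic point is a convex combination of two real minority points, so SMOTE can only densify the region the minority class already occupies; it cannot produce novel phrasings the way GPT-4 generation or phrase-level recombination (ALP) can.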