Enhancing Science Classroom Discourse Analysis through Joint Multi-Task Learning for Reasoning-Component Classification

arXiv cs.CL / 4/24/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces ADAS, an automated system for analyzing science classroom discourse by jointly classifying utterance type and reasoning components for both teacher and student speech.
  • To handle strong label imbalance, the authors use stratified re-splitting of the dataset, LLM-based synthetic data augmentation focused on minority classes, and a dual-probe RoBERTa-base classifier.
  • They report that a zero-shot GPT-5.4 baseline reaches macro-F1 of 0.467 for utterance type (UT) and 0.476 for reasoning components (RC), establishing upper bounds for prompt-only methods and motivating fine-tuning.
  • Beyond classification, the study performs several discourse analyses (e.g., UT×RC co-occurrence, cognitive complexity, lag-sequential, and IRF chain analyses) and finds teacher “Feedback-with-Question (Fq)” moves are the most consistent antecedents of students’ inferential reasoning (SR-I).
  • The results suggest LLM augmentation improves minority-class recognition for UT, while the RC task’s structural simplicity makes it more tractable even for lexical baselines.

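Macro-F1, the metric quoted in the baseline numbers above, averages per-class F1 with equal weight, so a classifier that ignores a minority class is penalized directly — which is why the label imbalance the authors address matters for this score. A minimal sketch (the label values here are invented toy codes, not the paper's actual coding scheme):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over all classes seen in either list."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy example: the classifier ignores the rare class entirely.
y_true = ["Q", "Q", "Q", "Q", "Fq"]
y_pred = ["Q", "Q", "Q", "Q", "Q"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.444
```

Accuracy on this toy example is 80%, yet macro-F1 is only 0.444, because the missed minority class contributes a per-class F1 of zero to the unweighted average.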
Abstract

Analyzing students' reasoning patterns in science classrooms is critical for understanding knowledge construction mechanisms and for improving instructional practice to maximize cognitive engagement, yet manual coding of classroom discourse at scale remains prohibitively labor-intensive. We present an automated discourse analysis system (ADAS) that jointly classifies teacher and student utterances along two complementary dimensions: Utterance Type (UT) and Reasoning Component (RC), derived from our prior CDAT framework. To address severe label imbalance among minority classes, we (1) re-split the annotated corpus with label stratification, (2) apply LLM-based synthetic data augmentation targeting minority classes, and (3) train a dual-probe-head RoBERTa-base classifier. A zero-shot GPT-5.4 baseline achieves macro-F1 of 0.467 on UT and 0.476 on RC, establishing meaningful upper bounds for prompt-only approaches and motivating fine-tuning. Beyond classification, we conduct discourse pattern analyses including UT×RC co-occurrence profiling, per-session Cognitive Complexity Index (CCI) computation, lag-sequential analysis, and IRF chain analysis, revealing that teacher Feedback-with-Question (Fq) moves are the most consistent antecedents of student inferential reasoning (SR-I). Our results demonstrate that LLM-based augmentation meaningfully improves minority-class recognition for UT, and that the structural simplicity of the RC task makes it tractable even for lexical baselines.
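The lag-sequential finding — that Fq moves are the most consistent antecedents of SR-I — rests, at its core, on lag-1 transition probabilities over the coded utterance sequence: how often does a given student code follow a given teacher code? A hedged sketch of that core computation, using an invented coded session (the paper's full method, including any significance statistics such as adjusted residuals, is not reproduced here):

```python
from collections import Counter, defaultdict

def lag1_transitions(codes):
    """Conditional probabilities P(next code | current code) from a coded sequence."""
    pair_counts = Counter(zip(codes, codes[1:]))   # adjacent (current, next) pairs
    from_counts = Counter(codes[:-1])              # occurrences as "current"
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / from_counts[a]
    return probs

# Invented IRF-style session: teacher Fq moves tend to precede student SR-I.
session = ["Fq", "SR-I", "Fq", "SR-I", "F", "R", "Fq", "R"]
probs = lag1_transitions(session)
print(round(probs["Fq"]["SR-I"], 3))  # → 0.667
```

In a real analysis one would compare such conditional probabilities against base rates (e.g., via adjusted residuals) before claiming a code is a consistent antecedent; the sketch only shows the counting step.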