Tell Me Why: Designing an Explainable LLM-based Dialogue System for Student Problem Behavior Diagnosis

arXiv cs.AI / 4/27/2026


Key Points

  • The paper proposes an explainable, fine-tuned LLM-based dialogue system to help teachers diagnose student problem behavior by recommending categories and intervention strategies with supporting evidence.
  • It introduces a hierarchical attribution approach using explainable AI (xAI) to trace which parts of the dialogue inform each recommendation.
  • The system then converts the identified evidence into natural-language explanations to improve transparency.
  • Technical evaluations show improved performance over baseline methods in identifying supporting evidence, and a preliminary user study (22 pre-service teachers) indicates higher reported trust when explanations are provided.

Abstract

Diagnosing student problem behaviors requires teachers to synthesize multifaceted information, identify behavioral categories, and plan intervention strategies. Although fine-tuned large language models (LLMs) can support this process through multi-turn dialogue, they rarely explain why a strategy is recommended, limiting transparency and teachers' trust. To address this issue, we present an explainable dialogue system built on a fine-tuned LLM. The system uses a hierarchical attribution method based on explainable AI (xAI) to identify dialogue evidence for each recommendation and generate a natural-language explanation based on that evidence. In technical evaluation, the method outperformed baseline approaches in identifying supporting evidence. In a preliminary user study with 22 pre-service teachers, participants who received explanations reported higher trust in the system. These findings suggest a promising direction for improving LLM explainability in educational dialogue systems.