SLOW: Strategic Logical-inference Open Workspace for Cognitive Adaptation in AI Tutoring

arXiv cs.AI / 3/31/2026


Key Points

  • The paper argues that current LLM-based tutoring systems rely on single-pass generation, which limits deliberate reasoning and entangles learner diagnosis, emotional perception, and teaching decisions.
  • It proposes SLOW, a theory-informed tutoring framework that creates a transparent reasoning workspace separating learner-state inference from instructional action selection.
  • SLOW combines causal evidence parsing from learner language, fuzzy cognitive diagnosis with counterfactual stability analysis, and prospective affective reasoning to anticipate emotional impacts of teaching choices.
  • Hybrid human-AI evaluations report improvements in personalization, emotional sensitivity, and instructional clarity, with ablation studies confirming each module’s necessity.
  • The authors claim SLOW increases interpretability and educational validity for LLM-based adaptive instruction via a visualized, reliable decision-making process.
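The core separation the paper describes, learner-state inference first, instructional action selection second, can be caricatured as a two-stage pipeline. The sketch below is a minimal illustration only: the class names, keyword heuristics, and thresholds are assumptions for exposition, not SLOW's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical learner state; SLOW's real representation is far richer
# (causal evidence, fuzzy mastery levels, affective trajectory).
@dataclass
class LearnerState:
    mastery: float      # 0..1 estimated concept mastery
    frustration: float  # 0..1 inferred negative affect

def infer_state(utterance: str) -> LearnerState:
    # Stage 1 (diagnosis): infer state from learner language.
    # Toy keyword heuristics stand in for evidence parsing.
    mastery = 0.2 if "confused" in utterance.lower() else 0.7
    frustration = 0.8 if "!" in utterance else 0.2
    return LearnerState(mastery, frustration)

def select_action(state: LearnerState) -> str:
    # Stage 2 (strategy): pick a tutoring move from the diagnosed
    # state, considering emotional impact before pedagogical push.
    if state.frustration > 0.5:
        return "reassure_and_simplify"
    if state.mastery < 0.5:
        return "scaffolded_hint"
    return "extension_challenge"

action = select_action(infer_state("I'm confused about recursion"))
```

Keeping the two stages as separate functions is what makes the decision process inspectable: the intermediate `LearnerState` can be logged and visualized before any tutoring text is generated.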

Abstract

While Large Language Models (LLMs) have demonstrated remarkable fluency in educational dialogues, most generative tutors primarily operate through intuitive, single-pass generation. This reliance on fast thinking precludes a dedicated reasoning workspace, forcing multiple diagnostic and strategic signals to be processed in a conflated manner. As a result, learner cognitive diagnosis, affective perception, and pedagogical decision-making become tightly entangled, which limits the tutoring system's capacity for deliberate instructional adaptation. We propose SLOW, a theory-informed tutoring framework that supports deliberate learner-state reasoning within a transparent decision workspace. Inspired by dual-process accounts of human tutoring, SLOW explicitly separates learner-state inference from instructional action selection. The framework integrates causal evidence parsing from learner language, fuzzy cognitive diagnosis with counterfactual stability analysis, and prospective affective reasoning to anticipate how instructional choices may influence learners' emotional trajectories. These signals are jointly considered to guide pedagogically and affectively aligned tutoring strategies. Evaluation using hybrid human-AI judgments demonstrates significant improvements in personalization, emotional sensitivity, and clarity. Ablation studies further confirm the necessity of each module, showcasing how SLOW enables interpretable and reliable intelligent tutoring through a visualized decision-making process. This work advances the interpretability and educational validity of LLM-based adaptive instruction.
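The abstract's "fuzzy cognitive diagnosis with counterfactual stability analysis" pairs two ideas: graded rather than binary mastery estimates, and a check that the diagnosis would survive small changes in the evidence. A toy sketch of that pairing, with made-up membership functions and a one-response perturbation (none of this is the paper's actual formulation):

```python
def fuzzy_mastery(correct: int, total: int) -> dict:
    """Map an accuracy ratio to fuzzy memberships over mastery levels.

    Triangular membership functions chosen purely for illustration.
    """
    p = correct / total
    return {
        "low": max(0.0, 1.0 - 2.0 * p),
        "medium": 1.0 - abs(2.0 * p - 1.0),
        "high": max(0.0, 2.0 * p - 1.0),
    }

def diagnosis(correct: int, total: int) -> str:
    """Defuzzify: report the mastery level with the highest membership."""
    memberships = fuzzy_mastery(correct, total)
    return max(memberships, key=memberships.get)

def counterfactual_stable(correct: int, total: int) -> bool:
    """Does the diagnosis survive flipping one observed response?"""
    base = diagnosis(correct, total)
    neighbours = {
        diagnosis(max(correct - 1, 0), total),
        diagnosis(min(correct + 1, total), total),
    }
    return neighbours == {base}
```

For example, `diagnosis(9, 10)` yields `"high"` and is counterfactually stable, while `diagnosis(7, 10)` yields `"medium"` but flips to `"high"` under a one-response perturbation, so a cautious tutor would treat that diagnosis as tentative.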