SLOW: Strategic Logical-inference Open Workspace for Cognitive Adaptation in AI Tutoring
arXiv cs.AI · March 31, 2026
Key Points
- The paper argues that current LLM-based tutoring systems rely on single-pass generation, which limits deliberate reasoning and entangles learner diagnosis, emotional perception, and teaching decisions into one opaque step.
- It proposes SLOW, a theory-informed tutoring framework that creates a transparent reasoning workspace separating learner-state inference from instructional action selection.
- SLOW combines causal evidence parsing from learner language, fuzzy cognitive diagnosis with counterfactual stability analysis, and prospective affective reasoning to anticipate emotional impacts of teaching choices.
- Hybrid human-AI evaluations report improvements in personalization, emotional sensitivity, and instructional clarity over single-pass baselines, with ablation studies confirming that each module contributes to these gains.
- The authors claim SLOW increases interpretability and educational validity for LLM-based adaptive instruction via a visualized, reliable decision-making process.
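The core design choice in the key points above is separating learner-state inference from instructional action selection inside an inspectable workspace. A minimal sketch of that separation follows; all names, stages, and heuristics here are hypothetical illustrations, since the paper's actual interfaces and algorithms are not given in this summary.

```python
from dataclasses import dataclass, field

# Hypothetical SLOW-style workspace: stage 1 infers a learner state from
# language, stage 2 checks the diagnosis for counterfactual stability,
# stage 3 picks a teaching move while anticipating its emotional impact.
# Every intermediate value is a plain object, so the reasoning stays
# transparent rather than buried in one generation pass.

@dataclass
class LearnerState:
    mastery: dict                                  # concept -> fuzzy degree in [0, 1]
    affect: str                                    # coarse emotional label
    evidence: list = field(default_factory=list)   # parsed causal cues

def infer_state(utterance: str) -> LearnerState:
    """Stage 1 (stubbed): parse causal evidence from learner language."""
    frustrated = any(w in utterance.lower() for w in ("stuck", "confused"))
    mastery = {"fractions": 0.3 if frustrated else 0.7}
    return LearnerState(
        mastery=mastery,
        affect="frustrated" if frustrated else "neutral",
        evidence=[utterance],
    )

def counterfactual_stable(state: LearnerState, eps: float = 0.1) -> bool:
    """Stage 2 (stubbed): would the diagnosis survive small perturbations?"""
    # Illustrative rule: mid-range mastery estimates count as unstable.
    return all(abs(m - 0.5) > eps for m in state.mastery.values())

def select_action(state: LearnerState) -> str:
    """Stage 3: choose a teaching move given state and its stability."""
    if not counterfactual_stable(state):
        return "ask_clarifying_question"          # gather more evidence first
    if state.affect == "frustrated":
        return "scaffold_with_easier_example"     # avoid worsening affect
    return "present_next_challenge"

workspace = infer_state("I'm stuck on adding fractions")
action = select_action(workspace)
```

Because each stage returns an inspectable object, a tutor UI could render the workspace directly, which is one way to read the paper's claim of a "visualized, reliable decision-making process."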