Interpretable Traces, Unexpected Outcomes: Investigating the Disconnect in Trace-Based Knowledge Distillation

arXiv cs.CL / 4/20/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies whether chain-of-thought (CoT) traces used for reasoning-focused LLMs are truly semantically correct and understandable to end users.
  • In QA experiments, the researchers create training pairs in which each question is always paired with the correct final answer but with intermediate trace sub-steps that are either verifiably correct or verifiably incorrect.
  • The results show that trace correctness is a weak predictor of final answer correctness, with correct traces producing correct solutions only 28% of the time, and incorrect traces not necessarily reducing accuracy.
  • Although fine-tuning on verbose DeepSeek R1-style traces achieves the best model performance, human evaluations rate these traces as the least interpretable and as imposing the highest cognitive load.
  • The authors argue that practitioners should separate the objectives for model supervision (accuracy) from the design of traces intended for user interpretation.
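The experimental setup in the bullets above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual pipeline: it assumes a toy arithmetic QA task with a rule-based decomposition into checkable sub-steps, and the function names (`decompose`, `corrupt`, `make_pair`) are hypothetical.

```python
import random

def decompose(a, b, c):
    """Rule-based decomposition of (a + b) * c into verifiable sub-steps."""
    s = a + b
    steps = [f"step 1: {a} + {b} = {s}",
             f"step 2: {s} * {c} = {s * c}"]
    return steps, s * c

def corrupt(steps):
    """Perturb one intermediate value so the trace is verifiably incorrect."""
    i = random.randrange(len(steps))
    lhs, rhs = steps[i].rsplit("= ", 1)
    bad = steps[:]
    bad[i] = f"{lhs}= {int(rhs) + random.choice([-2, -1, 1, 2])}"
    return bad

def make_pair(a, b, c, correct_trace=True):
    """Build one fine-tuning example: the final answer is always correct;
    only the intermediate trace varies between correct and corrupted."""
    steps, answer = decompose(a, b, c)
    trace = steps if correct_trace else corrupt(steps)
    return {"question": f"What is ({a} + {b}) * {c}?",
            "trace": trace,
            "answer": answer}
```

Holding the final answer fixed while toggling trace validity is what lets the study attribute any accuracy difference to trace semantics alone.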

Abstract

Recent advances in reasoning-focused Large Language Models (LLMs) have introduced Chain-of-Thought (CoT) traces: intermediate reasoning steps generated before a final answer. These traces, as in DeepSeek R1, are used both to guide inference and to train smaller models. A common but under-examined assumption is that these traces are both semantically correct and interpretable to end users. While intermediate reasoning steps are believed to improve accuracy, we question whether they are actually valid and understandable. To isolate the effect of trace semantics, we design experiments in Question Answering (QA) using rule-based problem decomposition, creating fine-tuning datasets where each problem is paired with either verifiably correct or incorrect traces, while always providing the correct final answer. Trace correctness is evaluated by checking the accuracy of every reasoning sub-step. To assess interpretability, we fine-tune LLMs on three additional trace types: R1 traces, R1 trace summaries, and post-hoc explanations, and conduct a human study with 100 participants rating each type on a Likert scale. We find: (1) Trace correctness does not reliably predict correct final answers: correct traces led to correct solutions in only 28% of test cases, while incorrect traces did not consistently degrade accuracy. (2) Fine-tuning on verbose R1 traces yielded the best model performance, but users rated them least interpretable (3.39 interpretability, 4.59 cognitive load on a 5-point scale), whereas more interpretable decomposed traces did not achieve comparable accuracy. Together, these findings challenge that assumption, suggesting that researchers and practitioners should decouple model-supervision objectives from end-user-facing trace design.