Shorthand for Thought: Compressing LLM Reasoning via Entropy-Guided Supertokens

arXiv cs.CL / 4/30/2026

Key Points

  • The paper targets the inference-time cost of LLM reasoning by analyzing the token-level information structure of reasoning traces.
  • It finds that reasoning tokens naturally separate into low-entropy “structural” tokens (recurring scaffolding phrases) and higher-entropy “organic” tokens (problem-specific content); the first sketch after this list illustrates the split.
  • The authors propose a model-agnostic compression pipeline that derives “supertokens” via cross-word BPE merges over a model's own reasoning traces, then teaches the model to use them through supervised fine-tuning (see the second sketch below).
  • Across three model families and five mathematical reasoning benchmarks, the method shortens reasoning traces by 8.1% on average with no statistically significant accuracy loss on any model–benchmark pair.
  • The learned supertokens also serve as interpretable annotations of reasoning moves and enable diagnostic insights (e.g., productive recovery vs confusion cycles), with potential applications for RL reward shaping and early stopping.
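
As a concrete illustration of the entropy split, here is a minimal sketch that labels each token of a reasoning trace by the entropy of the model's predictive distribution at that position. It assumes access to per-token logits through Hugging Face transformers; the model name and the 2.0-bit threshold are illustrative choices, not values from the paper.

```python
# Label tokens as "structural" (low next-token entropy) or "organic"
# (high entropy). Model and threshold are illustrative, not from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # any causal LM with open logits works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def token_entropies(trace: str) -> list[tuple[str, float]]:
    """Entropy (in bits) of the model's next-token distribution at each position."""
    ids = tok(trace, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # (1, seq_len, vocab)
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    ent_bits = -(logp.exp() * logp).sum(dim=-1) / torch.log(torch.tensor(2.0))
    # Token at position i+1 is the one predicted from the prefix ending at i.
    toks = tok.convert_ids_to_tokens(ids[0])[1:]
    return list(zip(toks, ent_bits.tolist()))

THRESHOLD_BITS = 2.0  # illustrative split point between the two regimes
for t, h in token_entropies("Wait, let me double-check that step. 3 * 17 = 51."):
    print(f"{t!r:>14}  {h:5.2f} bits  {'structural' if h < THRESHOLD_BITS else 'organic'}")
```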

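The supertoken derivation itself can be read as standard BPE run over already-tokenized traces, with the usual word-boundary restriction removed so merges can span whitespace. The paper's exact procedure is not spelled out here, so the following is a sketch under that reading; the merge count and frequency floor are illustrative.

```python
# Cross-word BPE over tokenized reasoning traces: repeatedly merge the most
# frequent adjacent token pair, with no word-boundary restriction, so recurring
# scaffolding phrases collapse into single supertokens.
from collections import Counter

def most_frequent_pair(seqs):
    pairs = Counter()
    for seq in seqs:
        pairs.update(zip(seq, seq[1:]))
    return pairs.most_common(1)[0] if pairs else None  # ((a, b), count)

def apply_merge(seq, pair, merged):
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(merged)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def derive_supertokens(traces, num_merges=50, min_count=5):
    """traces: list of token lists, e.g. tokenizer.tokenize(trace) per trace."""
    seqs = [list(t) for t in traces]
    supertokens = []
    for _ in range(num_merges):
        best = most_frequent_pair(seqs)
        if best is None or best[1] < min_count:
            break
        pair, _ = best
        merged = pair[0] + pair[1]  # e.g. "Ġlet" + "Ġme" -> "ĠletĠme" (" let me")
        supertokens.append(merged)
        seqs = [apply_merge(s, pair, merged) for s in seqs]
    return supertokens
```

The merged strings would then be added to the tokenizer vocabulary and the model fine-tuned on retokenized traces so it emits the supertokens natively; that supervised fine-tuning stage is the second half of the pipeline.
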
Abstract

Reasoning in Large Language Models incurs significant inference-time compute, yet the token-level information structure of reasoning traces remains underexplored. We observe that reasoning tokens split into two functional types: low-entropy “structural” tokens (recurring phrases that scaffold the reasoning process) and higher-entropy “organic” tokens (problem-specific content that drives toward a solution). This asymmetry motivates a simple, model-agnostic compression pipeline: apply cross-word BPE merges on a model's own reasoning traces to derive “supertokens” that capture frequent structural patterns, then teach the model to adopt them via supervised fine-tuning. Across three model families and five mathematical reasoning benchmarks, our approach shortens reasoning traces by 8.1% on average with no statistically significant accuracy loss on any model–benchmark pair. Beyond compression, supertokens act as interpretable reasoning-move annotations (backtracking, verification, strategy shifts), exposing the model's high-level strategy at a glance. Analyzing transitions between structural categories reveals systematic differences between correct and incorrect traces: correct traces show productive recovery (backtracking followed by strategy shifts and verification), while incorrect traces are dominated by confusion cycles (repeated hedging and unresolved contradictions). These diagnostic signals suggest applications in reward shaping and early stopping for RL-based reasoning training.
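
The transition analysis at the end of the abstract amounts to bigram statistics over category-labeled traces. A minimal sketch follows, with hypothetical category labels and toy traces standing in for the paper's supertoken-derived annotations:

```python
# Compare transition frequencies between reasoning-move categories in
# correct vs. incorrect traces. Labels and traces here are toy examples.
from collections import Counter

def transition_freqs(label_seqs):
    counts = Counter()
    for seq in label_seqs:
        counts.update(zip(seq, seq[1:]))
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

# Toy data echoing the paper's finding: correct traces recover productively,
# incorrect traces cycle through hedging and contradiction.
correct = [["backtrack", "strategy_shift", "verify"],
           ["verify", "backtrack", "strategy_shift", "verify"]]
incorrect = [["hedge", "contradict", "hedge", "contradict"],
             ["hedge", "hedge", "contradict", "hedge"]]

for name, seqs in (("correct", correct), ("incorrect", incorrect)):
    top = sorted(transition_freqs(seqs).items(), key=lambda kv: -kv[1])[:3]
    print(name, *(f"{a}->{b}: {f:.2f}" for (a, b), f in top))
```

Differences in these transition distributions are what the authors propose to exploit for reward shaping and early stopping in RL-based reasoning training.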