Thinking Without Words: Efficient Latent Reasoning with Abstract Chain-of-Thought

arXiv cs.CL / 4/27/2026


Key Points

  • The paper proposes “Abstract Chain-of-Thought,” a post-training method that replaces long, explicit natural-language chains-of-thought with a short sequence of discrete latent “abstract” tokens drawn from a reserved vocabulary during inference.
  • It introduces a policy-iteration-style warm-up procedure that alternates between supervised fine-tuning from verbal CoT (via masking/bottlenecking) and self-distillation to generate abstract tokens from the prompt alone using constrained decoding.
  • After warm-up, the method uses warm-started reinforcement learning with constrained decoding to improve the generation of abstract reasoning sequences.
  • Experiments report up to 11.6× fewer reasoning tokens while maintaining comparable performance on math, instruction-following, and multi-hop reasoning, and the approach generalizes across different LLM families.
  • The authors observe an emergent power-law distribution over the abstract token vocabulary that changes across training phases, suggesting dynamics similar to those seen in natural language.
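The constrained decoding mentioned above restricts sampling to the reserved abstract vocabulary during the latent-reasoning phase. As a minimal illustrative sketch (the token IDs, vocabulary size, and logit values below are toy assumptions, not the paper's implementation), the constraint amounts to taking the argmax, or renormalizing probabilities, over the allowed subset only:

```python
import math

# Toy setup (assumptions for illustration): a 16-token vocabulary whose
# last four IDs are the reserved "abstract" tokens.
VOCAB_SIZE = 16
ABSTRACT_IDS = range(12, 16)

def constrained_argmax(logits, allowed_ids):
    """Pick the highest-logit token among the allowed IDs only."""
    return max(allowed_ids, key=lambda i: logits[i])

def constrained_softmax(logits, allowed_ids):
    """Renormalize probabilities over the allowed subset, masking the rest."""
    exps = {i: math.exp(logits[i]) for i in allowed_ids}
    z = sum(exps.values())
    return {i: v / z for i, v in exps.items()}

# Toy logits: the globally best token (ID 3) is a natural-language token,
# but the constraint forces decoding into the reserved range.
logits = [0.0] * VOCAB_SIZE
logits[3] = 5.0
logits[14] = 2.0

print(constrained_argmax(logits, ABSTRACT_IDS))  # 14, not the global argmax 3
```

In practice this kind of mask would be applied to the model's logits at each step of abstract-sequence generation; the sketch only shows the selection rule itself.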

Abstract

While long, explicit chains-of-thought (CoT) have proven effective on complex reasoning tasks, they are costly to generate during inference. Non-verbal reasoning methods have emerged with shorter generation lengths by leveraging continuous representations, yet their performance lags behind verbalized CoT. We propose **Abstract Chain-of-Thought**, a discrete latent reasoning post-training mechanism in which the language model produces a short sequence of tokens from a reserved vocabulary in lieu of a natural language CoT, before generating a response. To make previously unseen "abstract" tokens useful, we introduce a policy iteration-style warm-up loop that alternates between (i) bottlenecking from a verbal CoT via masking and performing supervised fine-tuning, and (ii) self-distillation by training the model to generate abstract tokens from the prompt alone via constrained decoding with the codebook. After warm-up, we optimize the generation of abstract sequences with warm-started reinforcement learning under constrained decoding. Abstract-CoT achieves up to 11.6× fewer reasoning tokens while demonstrating comparable performance across mathematical reasoning, instruction-following, and multi-hop reasoning, and generalizes across language model families. We also find an emergent power law distribution over the abstract vocabulary, akin to those seen in natural language, that evolves across the training phases. Our findings highlight the potential for post-training latent reasoning mechanisms that enable efficient inference through a learned abstract reasoning language.
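The power-law observation above can be checked by fitting a line to log-frequency versus log-rank of the abstract-token counts. A hedged sketch on synthetic Zipf-like counts (not the paper's data; the exponent and count formula are assumptions for illustration):

```python
import math

def fit_power_law(counts):
    """Least-squares slope of log(frequency) vs. log(rank).

    A slope near -1 on rank-ordered counts indicates a Zipf-like
    power law, the pattern the paper reports for abstract-token usage.
    """
    xs = [math.log(r) for r in range(1, len(counts) + 1)]
    ys = [math.log(c) for c in sorted(counts, reverse=True)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic counts following an exact Zipf law: count(rank) = 1000 / rank.
counts = [1000 / r for r in range(1, 101)]
print(round(fit_power_law(counts), 3))  # -1.0 by construction
```

Tracking how this fitted exponent drifts across the warm-up and RL phases would be one way to quantify the "evolving" distribution the authors describe.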