CircuitProbe: Predicting Reasoning Circuits in Transformers via Stability Zone Detection

arXiv cs.AI / 4/2/2026

Key Points

  • The paper introduces CircuitProbe, a method to predict “reasoning circuits” in transformer models from activation statistics in under 5 minutes on CPU, replacing costly brute-force sweeps that take about 25 GPU hours per model.
  • CircuitProbe distinguishes two types of reasoning circuits: early-layer “stability circuits” detected via the derivative of representation change, and late-layer “magnitude circuits” identified using anomaly scoring.
  • Across 9 models spanning 6 architectures (including 2025 models), the authors report that CircuitProbe’s top predicted circuit locations either match the optimal circuit exactly or fall within 2 layers of it in every validated case.
  • A scaling study on the Qwen 2.5 family suggests duplicating the detected circuit consistently improves performance for models under 3B parameters, but degrades performance for 7B+ models, indicating a practical technique mainly for smaller LLMs.
  • The method is data-efficient (as few as 10 calibration examples) and shows stable predictions across multiple languages (English, Hindi, Chinese, and French).
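
The circuit-duplication operation the paper evaluates can be illustrated with a toy sketch. This is not the authors' code: the layer stack here is a list of plain Python functions standing in for transformer blocks, and `duplicate_circuit` simply repeats a contiguous slice at inference time, as the brute-force sweeps and CircuitProbe's predictions both target.

```python
# Hypothetical sketch: duplicating a contiguous "circuit" of layers at
# inference time. Real models would re-run the corresponding transformer
# blocks on the hidden state; these toy layers just add integers.

def duplicate_circuit(layers, start, end):
    """Return a new layer list with the block layers[start:end] repeated once."""
    circuit = layers[start:end]
    return layers[:end] + circuit + layers[end:]

def run(layer_stack, x):
    """Apply the layer stack sequentially to input x."""
    for layer in layer_stack:
        x = layer(x)
    return x

# Toy example: six "layers" acting on an integer hidden state.
layers = [lambda x, i=i: x + i for i in range(6)]

# Duplicate the block spanning layers 1-2 (a hypothetical detected circuit).
expanded = duplicate_circuit(layers, 1, 3)
assert len(expanded) == len(layers) + 2
```

In a real transformer the duplicated block shares weights with the original, so this adds inference compute but no parameters, which is consistent with the paper framing it as a test-time technique rather than a training change.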

Abstract

Transformer language models contain localized reasoning circuits: contiguous layer blocks that improve reasoning when duplicated at inference time. Finding these circuits currently requires brute-force sweeps costing about 25 GPU hours per model. We propose CircuitProbe, which predicts circuit locations from activation statistics in under 5 minutes on CPU, a speedup of three to four orders of magnitude. We find that reasoning circuits come in two types: stability circuits in early layers, detected through the derivative of representation change, and magnitude circuits in late layers, detected through anomaly scoring. We validate across 9 models spanning 6 architectures, including 2025 models, confirming that CircuitProbe's top predictions match or fall within 2 layers of the optimal circuit in all validated cases. A scaling experiment across the Qwen 2.5 family reveals that layer duplication consistently benefits models under 3B parameters but degrades performance in 7B+ models, making this a practical scaling technique for small language models. CircuitProbe requires as few as 10 calibration examples, and its predictions are stable across English, Hindi, Chinese, and French.
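
The abstract names two detection heuristics without giving formulas, so the sketch below shows one plausible reading, not the paper's actual statistics: "derivative of representation change" is taken as the discrete derivative of the mean layer-to-layer L2 distance between hidden states, and "anomaly scoring" as a simple z-score test on per-layer activation norms. All function names and thresholds are illustrative assumptions.

```python
import numpy as np

def representation_change(hidden_states):
    """Mean L2 distance between consecutive layers' representations.

    hidden_states: array of shape (num_layers + 1, num_tokens, dim),
    i.e. embeddings plus one entry per layer. Returns shape (num_layers,).
    """
    diffs = np.linalg.norm(np.diff(hidden_states, axis=0), axis=-1)
    return diffs.mean(axis=-1)

def stability_circuit(hidden_states):
    """Layer where the change curve flattens: smallest |derivative|.

    A hypothetical stand-in for the paper's early-layer stability signal.
    """
    change = representation_change(hidden_states)
    derivative = np.diff(change)  # discrete derivative across layers
    return int(np.argmin(np.abs(derivative)))

def magnitude_circuit(hidden_states, z_thresh=2.0):
    """Layers whose mean activation norm is a z-score outlier.

    A hypothetical stand-in for the paper's late-layer anomaly scoring.
    """
    norms = np.linalg.norm(hidden_states, axis=-1).mean(axis=-1)
    z = (norms - norms.mean()) / (norms.std() + 1e-8)
    return [int(i) for i in np.where(z > z_thresh)[0]]
```

Both statistics need only forward-pass activations on a handful of calibration prompts, which is consistent with the reported CPU-only runtime and 10-example data budget; the exact scoring rules in CircuitProbe may differ.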