Entanglement as Memory: Mechanistic Interpretability of Quantum Language Models

arXiv cs.CL · March 30, 2026


Key Points

  • The paper studies whether quantum language models use genuinely quantum resources by moving beyond endpoint metrics to mechanistic interpretability of learned memory strategies.
  • Using causal gate ablation, entanglement tracking, and density-matrix interchange interventions on a controlled long-range dependency task, the authors find that single-qubit quantum language models are exactly classically simulable and learn the same geometric strategy as classical baselines.
  • In contrast, two-qubit models with entangling gates learn a distinct strategy that encodes context in inter-qubit entanglement, supported by three independent causal tests (p < 0.0001, d = 0.89).
  • When run on real quantum hardware, the entanglement-based strategy fails under device noise, degrading toward chance, while the classical geometric strategy remains robust.
  • The results suggest a noise–expressivity tradeoff that determines which internal strategies survive deployment, and the work positions mechanistic interpretability as a tool for advancing the science of quantum language models.
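The "entanglement tracking" referred to in the key points reduces to a standard quantity: the von Neumann entropy of one qubit's reduced density matrix, which is zero for product states and one bit for maximally entangled states. Below is a minimal NumPy sketch of that diagnostic; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def partial_trace_qubit0(rho):
    """Trace out qubit 1 of a two-qubit density matrix (4x4),
    returning the 2x2 reduced state of qubit 0."""
    rho = rho.reshape(2, 2, 2, 2)          # indices: (q0, q1, q0', q1')
    return np.einsum('ijkj->ik', rho)      # sum over q1 = q1'

def entanglement_entropy(state):
    """Von Neumann entropy (in bits) of qubit 0's reduced state,
    for a pure two-qubit state vector of length 4."""
    rho = np.outer(state, state.conj())
    reduced = partial_trace_qubit0(rho)
    evals = np.linalg.eigvalsh(reduced)
    evals = evals[evals > 1e-12]           # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # maximally entangled Bell state
product = np.array([1, 0, 0, 0], dtype=float)  # |00>, no entanglement
```

Here `entanglement_entropy(bell)` gives 1.0 bit and `entanglement_entropy(product)` gives 0.0, so a curve of this quantity over a sequence shows whether a two-qubit model is actually storing context in entanglement.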

Abstract

Quantum language models have shown competitive performance on sequential tasks, yet whether trained quantum circuits exploit genuinely quantum resources -- or merely embed classical computation in quantum hardware -- remains unknown. Prior work has evaluated these models through endpoint metrics alone, without examining the memory strategies they actually learn internally. We introduce the first mechanistic interpretability study of quantum language models, combining causal gate ablation, entanglement tracking, and density-matrix interchange interventions on a controlled long-range dependency task. We find that single-qubit models are exactly classically simulable and converge to the same geometric strategy as matched classical baselines, while two-qubit models with entangling gates learn a representationally distinct strategy that encodes context in inter-qubit entanglement -- confirmed by three independent causal tests (p < 0.0001, d = 0.89). On real quantum hardware, only the classical geometric strategy survives device noise; the entanglement strategy degrades to chance. These findings open mechanistic interpretability as a tool for the science of quantum language models and reveal a noise-expressivity tradeoff governing which learned strategies survive deployment.
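The abstract's claim that single-qubit models are exactly classically simulable rests on a textbook fact: any single-qubit unitary acts on the Bloch vector as an ordinary 3D rotation, so the full state can be tracked with three real numbers. The sketch below (not the paper's code) checks this equivalence for a rotation about the X axis.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(rho):
    """Bloch vector (r_x, r_y, r_z) of a single-qubit density matrix."""
    return np.real(np.array([np.trace(rho @ P) for P in (X, Y, Z)]))

def rotate(r, axis, theta):
    """Rodrigues rotation of a Bloch vector: the classical image of U rho U†."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    return (r * np.cos(theta) + np.cross(k, r) * np.sin(theta)
            + k * np.dot(k, r) * (1 - np.cos(theta)))

theta = 0.7

# Quantum side: evolve |0><0| under U = exp(-i theta/2 X)
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X
rho = np.array([[1, 0], [0, 0]], dtype=complex)
r_quantum = bloch(U @ rho @ U.conj().T)

# Classical side: the same rotation applied to the Bloch vector directly
r_classical = rotate(bloch(rho), axis=(1, 0, 0), theta=theta)
```

The two vectors agree exactly, which is why a single-qubit "quantum" memory can only ever realize the same geometric strategy a classical model can.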