Circuit Complexity of Hierarchical Knowledge Tracing and Implications for Log-Precision Transformers

arXiv cs.LG / 3/26/2026


Key Points

  • The paper studies knowledge tracing when concepts are organized in hierarchical prerequisite structures, analyzing what transformer-style computation can or cannot provably do using circuit complexity theory.
  • It formalizes recursive-majority mastery propagation on prerequisite trees and shows it is in $\mathsf{NC}^1$ unconditionally using $O(\log n)$-depth, bounded-fanin circuits.
  • The authors argue that separating recursive-majority propagation from uniform $\mathsf{TC}^0$ would require major breakthroughs on existing open lower bounds in circuit complexity.
  • Under a monotonicity restriction, the study derives an unconditional barrier showing that alternating ALL/ANY prerequisite trees induce a strict depth hierarchy for monotone threshold circuits.
  • Empirically, transformer encoders tend to learn permutation-invariant shortcut solutions on recursive-majority trees, but auxiliary supervision on intermediate subtrees elicits structure-dependent computation and attains near-perfect accuracy at depths 3--4, motivating structure-aware training objectives.
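The recursive-majority propagation task in the key points above can be illustrated with a short sketch. This is a hypothetical rendering, not the paper's exact formalization: it assumes a complete tree of fixed fan-in whose leaves are 0/1 mastery bits, with each internal concept node taking the majority vote of its children.

```python
def recursive_majority(leaves, fanin=3):
    """Propagate 0/1 mastery bits up a complete tree by majority vote.

    `leaves` must have length fanin**d for some depth d. Each pass
    collapses one level: a node is 1 iff a strict majority of its
    `fanin` children are 1.
    """
    level = list(leaves)
    while len(level) > 1:
        assert len(level) % fanin == 0, "expected a complete tree"
        level = [
            int(sum(level[i:i + fanin]) > fanin // 2)
            for i in range(0, len(level), fanin)
        ]
    return level[0]

# Depth-2 ternary tree: 9 leaf bits -> 3 subtree majorities -> root.
print(recursive_majority([1, 1, 0,  0, 0, 1,  1, 0, 1]))  # -> 1
```

Each while-loop iteration corresponds to one circuit layer, so a depth-$d$ tree needs $d$ bounded-fanin majority layers, matching the $O(\log n)$-depth bound for $n$ leaves.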

Abstract

Knowledge tracing models mastery over interconnected concepts, often organized by prerequisites. We analyze hierarchical prerequisite propagation through a circuit-complexity lens to clarify what is provable about transformer-style computation on deep concept hierarchies. Using recent results that log-precision transformers lie in logspace-uniform $\mathsf{TC}^0$, we formalize prerequisite-tree tasks including recursive-majority mastery propagation. Unconditionally, recursive-majority propagation lies in $\mathsf{NC}^1$ via $O(\log n)$-depth bounded-fanin circuits, while separating it from uniform $\mathsf{TC}^0$ would require major progress on open lower bounds. Under a monotonicity restriction, we obtain an unconditional barrier: alternating ALL/ANY prerequisite trees yield a strict depth hierarchy for *monotone* threshold circuits. Empirically, transformer encoders trained on recursive-majority trees converge to permutation-invariant shortcuts; explicit structure alone does not prevent this, but auxiliary supervision on intermediate subtrees elicits structure-dependent computation and achieves near-perfect accuracy at depths 3--4. These findings motivate structure-aware objectives and iterative mechanisms for prerequisite-sensitive knowledge tracing on deep hierarchies.
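The alternating ALL/ANY trees behind the abstract's monotone-circuit barrier can be sketched similarly. In this hypothetical reading (the paper's exact gate placement is an assumption here), ALL nodes require every prerequisite child mastered and ANY nodes require at least one, with gate types alternating level by level:

```python
import math

def all_any_eval(leaves, fanin=2, root_is_all=True):
    """Evaluate an alternating ALL/ANY prerequisite tree bottom-up.

    ALL levels act as AND over children; ANY levels act as OR.
    Gate types alternate by level, with `root_is_all` fixing the
    gate at the root; `leaves` must have length fanin**d.
    """
    depth = round(math.log(len(leaves), fanin))
    # The lowest internal level matches the root's gate type iff the
    # number of internal levels is odd (gates alternate in between).
    is_all = root_is_all if depth % 2 == 1 else not root_is_all
    level = list(leaves)
    while len(level) > 1:
        op = all if is_all else any
        level = [int(op(level[i:i + fanin]))
                 for i in range(0, len(level), fanin)]
        is_all = not is_all
    return level[0]

# Depth-2 binary tree, ALL at root over ANY pairs:
print(all_any_eval([1, 0, 0, 1]))  # -> 1 (each pair has a mastered leaf)
print(all_any_eval([1, 0, 0, 0]))  # -> 0 (second pair fails its ANY gate)
```

The strict alternation is what blocks monotone threshold circuits from collapsing levels: each extra ALL/ANY layer adds a genuinely new quantifier alternation, which is the intuition behind the depth hierarchy claimed in the abstract.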