Watts-per-Intelligence Part II: Algorithmic Catalysis

arXiv cs.AI / 4/25/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes a thermodynamic framework for “algorithmic catalysis” within the watts-per-intelligence model: reusable computational structures that reduce the number of irreversible operations needed for a task class, subject to bounded restoration and structural selectivity constraints.
  • It shows that any class-specific speed-up is upper-bounded by the algorithmic mutual information between the computational substrate and the class descriptor, tying achievable performance to the information the substrate carries about the class.
  • The authors argue that installing this class information in the substrate incurs a minimum thermodynamic cost, grounded in Landauer’s principle for erasing information.
  • Combining these results yields a “coupling theorem” that lower-bounds the deployment horizon: how many times a catalyst must be reused before its installation energy is repaid.
  • A worked example on an affine SAT problem class illustrates the framework and situates contemporary learned systems within a unified information–thermodynamic constraint on intelligent computation.
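The break-even logic behind the coupling theorem can be sketched numerically. The snippet below is an illustrative back-of-the-envelope calculation, not the paper's derivation: the function names and the per-use energy saving are assumptions made for the example.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_cost_joules(bits: float, temperature_k: float = 300.0) -> float:
    """Minimum energy, per Landauer's principle, to erase `bits` bits of
    information at temperature T: E >= bits * k_B * T * ln 2."""
    return bits * K_B * temperature_k * math.log(2)

def break_even_uses(install_bits: float, savings_per_use_joules: float,
                    temperature_k: float = 300.0) -> float:
    """Deployment horizon at which a catalyst pays for itself: the one-time
    Landauer cost of installing `install_bits` of class information, divided
    by the energy saved on each task instance the catalyst accelerates."""
    return landauer_cost_joules(install_bits, temperature_k) / savings_per_use_joules

# Erasing one bit at room temperature costs roughly 2.9e-21 J; a catalyst
# encoding 1e6 bits that saves an assumed 1e-18 J per instance breaks even
# after a few thousand uses.
print(landauer_cost_joules(1.0))
print(break_even_uses(1e6, 1e-18))
```

The point of the sketch is only that the horizon scales linearly with the installed information and inversely with the per-use saving, which is the shape of bound the coupling theorem formalizes.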

Abstract

We develop a thermodynamic theory of algorithmic catalysis within the watts-per-intelligence framework, identifying reusable computational structures that reduce irreversible operations for a task class while satisfying bounded restoration and structural selectivity constraints. We prove that any class-specific speed-up is upper-bounded by the algorithmic mutual information between the substrate and the class descriptor, and that installing this information incurs a minimum thermodynamic cost via Landauer erasure. Combining these results yields a coupling theorem that lower-bounds the deployment horizon required for a catalyst to be energetically favourable. The framework is illustrated on an affine SAT class and situates contemporary learned systems within a unified information-thermodynamic constraint on intelligent computation.
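For intuition on why an affine SAT class is a natural test case: affine (XOR) constraints are linear equations over GF(2), solvable in polynomial time by Gaussian elimination, so a reusable elimination structure is a plausible catalyst for the whole class. The toy solver below, with a bitmask row encoding, is a generic sketch of that tractability; the paper's actual construction may differ.

```python
def solve_affine_gf2(rows, rhs, n):
    """Solve A x = b over GF(2). `rows` holds n-bit masks (the rows of A),
    `rhs` the corresponding right-hand-side bits. Returns one solution as a
    list of n bits (free variables set to 0), or None if inconsistent."""
    rows, rhs = list(rows), list(rhs)
    pivots = []  # (pivot column, pivot row index)
    r = 0
    for col in range(n):
        # Find a row at or below r with a 1 in this column.
        piv = next((i for i in range(r, len(rows)) if (rows[i] >> col) & 1), None)
        if piv is None:
            continue  # free variable
        rows[r], rows[piv] = rows[piv], rows[r]
        rhs[r], rhs[piv] = rhs[piv], rhs[r]
        # Eliminate the pivot column from every other row (full reduction).
        for i in range(len(rows)):
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivots.append((col, r))
        r += 1
    # A zero row with rhs 1 means the system is unsatisfiable.
    if any(row == 0 and bit for row, bit in zip(rows, rhs)):
        return None
    x = [0] * n
    for col, i in pivots:
        x[col] = rhs[i]
    return x

# x0 XOR x1 = 1 and x1 = 1 has the solution x0 = 0, x1 = 1.
print(solve_affine_gf2([0b11, 0b10], [1, 1], 2))
```

The reduced row set computed here is itself a reusable structure: once built for a system's left-hand side, it answers any right-hand side from the same class cheaply, which is the flavour of amortization the catalysis framework prices in energy.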