Beyond Test-Time Compute Strategies: Advocating Energy-per-Token in LLM Inference

arXiv cs.CL, 2026-03-24


Key Points

  • The paper argues that LLM inference can be inefficient because real-world tasks often require far less capability, allowing Small Language Models (SLMs) to achieve strong performance with lower compute.
  • It analyzes how test-time compute strategies such as Chain-of-Thought prompting and Majority Voting create an energy–accuracy trade-off by increasing reasoning compute even as they may reduce the need for larger models.
  • Using MMLU experiments and an analysis of transformer input–output token dynamics, it shows that hardware energy behavior during inference is nonlinear, motivating inference methods that account for physical energy curves.
  • The authors propose energy-aware evaluation metrics—especially Energy-per-Token—to complement accuracy benchmarks, and suggest dynamically regulating reasoning depth during CoT generation.
  • The work envisions an energy-aware routing approach that jointly chooses the model and inference strategy to balance accuracy with sustainable AI deployment goals.
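The Energy-per-Token metric and Majority Voting strategy summarized above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the exact metric definition is not given here, so the code assumes Energy-per-Token is total measured inference energy divided by the number of generated tokens, and `majority_vote` is a generic plurality vote over sampled answers.

```python
from collections import Counter

def energy_per_token(total_energy_joules: float, output_tokens: int) -> float:
    """Assumed definition: total inference energy divided by generated tokens (J/token)."""
    if output_tokens <= 0:
        raise ValueError("output_tokens must be positive")
    return total_energy_joules / output_tokens

def majority_vote(answers):
    """Plurality vote over k sampled generations; energy scales roughly linearly with k."""
    return Counter(answers).most_common(1)[0][0]

# Illustrative numbers: a run that drew 120 J while generating 300 tokens,
# and a 5-sample Majority Voting pass over one MMLU-style question.
print(energy_per_token(120.0, 300))          # 0.4 J/token
print(majority_vote(["B", "A", "B", "C", "B"]))  # "B"
```

Note the trade-off the bullets describe: Majority Voting with k samples multiplies token count (and hence energy) by roughly k, even when it lets a smaller model match a larger one on accuracy.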

Abstract

Large Language Models (LLMs) demonstrate exceptional performance across diverse tasks but come with substantial energy and computational costs, particularly in request-heavy scenarios. In many real-world applications, the full scale and capabilities of LLMs are often unnecessary, as Small Language Models (SLMs) can provide accurate responses for simpler text generation tasks. When enhanced with advanced reasoning strategies, such as Chain-of-Thought (CoT) prompting or Majority Voting, SLMs can approach the performance of larger models while reducing overall computational requirements. However, these strategies can also introduce additional energy costs, creating an energy–accuracy trade-off. Our analysis examines these trade-offs in test-time compute strategies for smaller models compared to larger ones, using the MMLU benchmark. Additionally, we explore the input–output token dynamics of transformer architectures, which result in nonlinear hardware energy operation curves for LLMs. To bridge AI research with its physical impact, we propose *energy efficiency metrics*, including Energy-per-Token, as complements to traditional accuracy benchmarks. Beyond model selection, we propose controlled reasoning in CoT token generation, using operating curves to regulate reasoning depth dynamically. This vision integrates an energy-aware routing mechanism, ensuring that model selection and inference strategies balance accuracy with energy efficiency for sustainable AI deployment.
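The energy-aware routing mechanism the abstract envisions can be sketched as a constrained selection problem: among (model, strategy) candidates, pick the cheapest one whose estimated accuracy clears a task-specific floor. This is a minimal sketch under assumed inputs; the `Candidate` fields, the accuracy/energy numbers, and the greedy selection rule are all illustrative, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                # e.g. "SLM + CoT", "LLM direct" (illustrative labels)
    est_accuracy: float      # estimated task accuracy, e.g. from a benchmark dev set
    energy_per_token: float  # measured J/token on the target hardware
    est_tokens: int          # expected output length; CoT inflates this

    @property
    def est_energy(self) -> float:
        # Expected energy for one request under the Energy-per-Token metric.
        return self.energy_per_token * self.est_tokens

def route(candidates, min_accuracy: float) -> Candidate:
    """Cheapest candidate meeting the accuracy floor; fall back to the most accurate."""
    viable = [c for c in candidates if c.est_accuracy >= min_accuracy]
    if viable:
        return min(viable, key=lambda c: c.est_energy)
    return max(candidates, key=lambda c: c.est_accuracy)

# Illustrative pool: CoT lets the small model pass the bar at lower energy
# than the large model, despite generating many more reasoning tokens.
pool = [
    Candidate("SLM direct", 0.62, 0.2, 50),
    Candidate("SLM + CoT", 0.74, 0.2, 400),
    Candidate("LLM direct", 0.78, 1.5, 60),
]
print(route(pool, min_accuracy=0.70).name)
```

A natural extension, in the spirit of the abstract's "controlled reasoning", would be to make `est_tokens` itself a decision variable, capping CoT depth where the hardware operating curve shows diminishing energy efficiency.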
