Temperature-Dependent Performance of Prompting Strategies in Extended Reasoning Large Language Models
arXiv cs.AI / 4/13/2026
Key Points
- The paper evaluates how sampling temperature and prompting strategy interact in extended reasoning LLMs, focusing on chain-of-thought versus zero-shot prompting.
- Using Grok-4.1 with extended reasoning on 39 AMO-Bench (IMO-level) math problems, zero-shot prompting peaks at moderate temperatures (59% accuracy at T=0.4 and T=0.7).
- In contrast, chain-of-thought prompting yields its best results at the temperature extremes (T=0.0 and T=1.0).
- The study finds that the advantage of extended reasoning grows substantially with temperature, rising from a 6x speed/accuracy benefit at T=0.0 to a 14.3x benefit at T=1.0.
- Overall, the results argue that temperature should be tuned jointly with prompting strategy rather than defaulting to T=0 for reasoning tasks.
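The joint-tuning recommendation above can be sketched as a simple grid search over strategy × temperature pairs. This is an illustrative sketch, not the paper's code: the `evaluate` function and its score table are hypothetical placeholders (only the 59% zero-shot figures at T=0.4 and T=0.7 come from the summary; the rest are made-up filler), and in practice `evaluate` would run the model on the benchmark and score its answers.

```python
from itertools import product

def evaluate(strategy: str, temperature: float) -> float:
    """Hypothetical accuracy lookup standing in for a real benchmark run.

    Only the zero-shot 0.59 entries at T=0.4 and T=0.7 reflect the
    reported results; all other values are illustrative placeholders.
    """
    scores = {
        ("zero-shot", 0.0): 0.49, ("zero-shot", 0.4): 0.59,
        ("zero-shot", 0.7): 0.59, ("zero-shot", 1.0): 0.51,
        ("cot", 0.0): 0.61, ("cot", 0.4): 0.54,
        ("cot", 0.7): 0.55, ("cot", 1.0): 0.62,
    }
    return scores[(strategy, temperature)]

def joint_sweep(strategies, temperatures):
    """Search strategy and temperature jointly instead of fixing T=0."""
    best = max(product(strategies, temperatures),
               key=lambda pair: evaluate(*pair))
    return best, evaluate(*best)

if __name__ == "__main__":
    (strategy, temp), acc = joint_sweep(
        ["zero-shot", "cot"], [0.0, 0.4, 0.7, 1.0])
    print(f"best: {strategy} at T={temp} ({acc:.0%})")
```

The point of the sketch is the search structure: because the best temperature differs by prompting strategy (moderate for zero-shot, extreme for chain-of-thought), neither axis can be tuned in isolation.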