C-voting: Confidence-Based Test-Time Voting without Explicit Energy Functions
arXiv cs.LG / April 16, 2026
Key Points
- The paper introduces confidence-based voting (C-voting), a test-time scaling method for recurrent latent models: it samples multiple candidate latent trajectories and selects the one that maximizes the average top-1 prediction probability, which serves as a confidence proxy.
- It reports that C-voting achieves 4.9% higher accuracy on Sudoku-hard than an energy-based voting strategy, highlighting improved performance over approaches that rely on explicit energy functions.
- A key contribution is that C-voting can be applied to recurrent models even when they do not have explicit energy functions, making it more broadly compatible with existing model designs.
- The authors propose an attention-based recurrent variant with randomized initial states (ItrSA++), and show that when combined with C-voting it outperforms HRM on Sudoku-extreme (95.2% vs. 55.0%) and Maze (78.6% vs. 74.5%).
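The selection rule described in the key points can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes each candidate trajectory yields per-position classification logits of shape `(T, V)`, scores each candidate by the mean of its top-1 softmax probabilities, and returns the index of the most confident candidate. The function names and shapes are this sketch's own assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def c_vote(candidate_logits):
    """Pick the candidate with the highest mean top-1 probability.

    candidate_logits: array of shape (K, T, V) -- K candidate
    trajectories, T output positions, V classes per position.
    Returns the index of the selected candidate.
    """
    probs = softmax(candidate_logits, axis=-1)   # (K, T, V)
    top1 = probs.max(axis=-1)                    # (K, T): top-1 prob per position
    confidence = top1.mean(axis=-1)              # (K,): confidence proxy
    return int(np.argmax(confidence))
```

Because the score only needs the model's output probabilities, no explicit energy function is required, which is the compatibility point the paper emphasizes.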