Expressive Power of Implicit Models: Rich Equilibria and Test-Time Scaling
arXiv stat.ML / 4/1/2026
Key Points
- Implicit models generate outputs by iterating a shared operator to a fixed point, effectively behaving like an infinite-depth, weight-tied network trained with constant memory.
- The paper addresses why implicit models can match or exceed explicit models by increasing test-time compute, providing a strict mathematical characterization of their expressive power.
- It proves that for a broad class of implicit operators, expressive power increases systematically with the number of test-time iterations, allowing the model to approach a richer function class.
- Experiments across image reconstruction, scientific computing, operations research, and LLM reasoning show that additional test-time iterations increase the complexity of the learned mapping and improve solution quality, with performance stabilizing rather than diverging.
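The fixed-point iteration described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the operator here is a hypothetical weight-tied layer `f(z, x) = tanh(W z + x)`, with `W` rescaled to spectral norm below 1 so the map is contractive and the equilibrium is unique. Increasing `n_iters` is the "test-time compute" knob: each extra iteration moves `z` closer to the equilibrium `z* = f(z*, x)`.

```python
import numpy as np

def implicit_forward(f, x, z0, n_iters=50, tol=1e-6):
    # Repeatedly apply the shared operator; more test-time iterations
    # push z closer to the fixed point z* = f(z*, x), mimicking an
    # infinite-depth weight-tied network with constant memory.
    z = z0
    for _ in range(n_iters):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy operator with hypothetical weights: rescale W to spectral norm 0.9,
# so tanh(W z + x) is a contraction and the iteration provably converges.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.9 / np.linalg.norm(W, 2)
f = lambda z, x: np.tanh(W @ z + x)

x = rng.standard_normal(4)
z_star = implicit_forward(f, x, np.zeros(4), n_iters=200)
```

Because `tanh` is 1-Lipschitz and `W` has spectral norm 0.9, the composite map contracts by at least a factor of 0.9 per step, so the residual `||f(z*, x) - z*||` shrinks geometrically with the iteration count.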