VLA-ATTC: Adaptive Test-Time Compute for VLA Models with Relative Action Critic Model
arXiv cs.RO / 5/5/2026
Key Points
- The paper proposes VLA-ATTC, a framework that adds adaptive test-time compute to vision-language-action (VLA) models to enable more deliberative decisions when needed.
- It uses an uncertainty-based “cognitive clutch” to switch from fast reflexive execution to a test-time compute (TTC) deliberation phase for complex or ambiguous situations.
- During TTC, a new Relative Action Critic (RAC) model selects the best action among generated candidates using pairwise comparisons, reducing reliance on unstable absolute value estimation.
- The work also introduces efficient sampling to reduce compute overhead and an automated data pipeline that generates preference pairs without manual annotation.
- Experiments on the LIBERO-LONG benchmark show VLA-ATTC cuts the failure rate of the state-of-the-art PI0.5 model by more than 50%, and the authors plan to open-source code and weights.
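The gating-and-selection loop described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `policy_entropy`, `rac_prefers`, and the scalar `score` stand in for the VLA policy's uncertainty signal, the Relative Action Critic, and candidate quality, all of which are assumptions here.

```python
import numpy as np

def policy_entropy(probs):
    """Entropy of the policy's action distribution, used as the
    uncertainty signal for the 'cognitive clutch'."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def score(action):
    # Toy stand-in for candidate quality: closeness to a fixed target.
    return -abs(action - 0.7)

def rac_prefers(a, b):
    """Hypothetical Relative Action Critic: True if candidate `a` wins
    the pairwise comparison against candidate `b` (stubbed via `score`)."""
    return score(a) > score(b)

def select_action(probs, candidates, threshold=1.0):
    """Uncertainty-gated test-time compute: below the entropy threshold,
    act reflexively (argmax); above it, deliberate by pairwise-comparing
    candidates with the RAC in a simple tournament."""
    if policy_entropy(probs) < threshold:
        return candidates[int(np.argmax(probs))]  # fast reflexive path
    best = candidates[0]  # TTC deliberation phase
    for cand in candidates[1:]:
        if rac_prefers(cand, best):
            best = cand
    return best

# A confident policy takes the reflexive path; a uniform (uncertain)
# one triggers deliberation and the RAC tournament.
print(select_action([0.99, 0.005, 0.005], [0.1, 0.5, 0.9]))  # → 0.1
print(select_action([1/3, 1/3, 1/3], [0.1, 0.5, 0.9]))       # → 0.5
```

The pairwise tournament only ever asks "is A better than B?", which mirrors the paper's motivation for the RAC: relative preferences are easier to learn reliably than absolute value estimates.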