Soft MPCritic: Amortized Model Predictive Value Iteration
arXiv cs.LG / 4/3/2026
Key Points
- The paper introduces “soft MPCritic,” an RL-MPC framework that learns in a soft value space while using sample-based planning for both online control and generating value targets.
- It implements MPC via MPPI (model predictive path integral control) and learns a terminal Q-function using fitted value iteration to better align the learned value function with the planner and effectively extend planning horizons.
- The approach adds an amortized warm-start method: open-loop action sequences planned during online interaction are reused to initialize batched MPPI rollouts, making value-target computation more efficient.
- Soft MPCritic uses scenario-based planning over an ensemble of learned dynamics models to improve next-step prediction accuracy, enabling robust learning even with short planning horizons.
- The authors argue the combination of value-learning, planner-aligned targets, and amortization makes the framework computationally practical and scalable for both simple and complex control tasks.
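The core loop described above can be sketched in a few lines of numpy. This is a minimal, hedged illustration of sample-based MPPI planning with a learned terminal value bootstrap and a shift-based warm start, not the paper's actual implementation; `dynamics`, `reward`, and `terminal_value` are placeholder callables standing in for the learned models, and all hyperparameter names are illustrative.

```python
import numpy as np

def mppi_plan(dynamics, reward, terminal_value, state, prev_plan,
              n_samples=256, sigma=0.5, temperature=1.0):
    """One MPPI planning step with a learned terminal-value bootstrap.

    dynamics(states, actions) -> next states, reward(states, actions) -> r,
    and terminal_value(states) -> V are placeholders for learned models.
    prev_plan is the (H, A) open-loop sequence from the previous step,
    reused here as a warm start (shift forward, repeat the last action).
    """
    horizon, action_dim = prev_plan.shape
    mean = np.vstack([prev_plan[1:], prev_plan[-1:]])      # shifted warm start
    noise = np.random.randn(n_samples, horizon, action_dim) * sigma
    actions = mean[None] + noise                           # (K, H, A) samples

    # Roll out all K action sequences in a batch under the learned model.
    returns = np.zeros(n_samples)
    states = np.repeat(state[None], n_samples, axis=0)
    for t in range(horizon):
        returns += reward(states, actions[:, t])
        states = dynamics(states, actions[:, t])
    # Bootstrap beyond the horizon with the learned terminal value,
    # effectively extending the planning horizon.
    returns += terminal_value(states)

    # Exponentially weighted (soft) averaging over sampled sequences.
    weights = np.exp((returns - returns.max()) / temperature)
    weights /= weights.sum()
    new_plan = (weights[:, None, None] * actions).sum(axis=0)
    soft_value = (weights * returns).sum()                 # value-target estimate
    return new_plan, soft_value
```

In a fitted-value-iteration loop, `soft_value` computed from batched replay states would serve as the regression target for the terminal Q-function, while `new_plan` is both executed online and cached as the next warm start.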