DiscreteRTC: Discrete Diffusion Policies are Natural Asynchronous Executors
arXiv cs.RO / 4/29/2026
Key Points
- The paper argues that physical AI needs asynchronous execution (“thinking while acting”) because synchronous executors’ inter-chunk pauses are harmful for dynamic environments, even if inference is fast.
- It reviews real-time chunking (RTC), an inpainting-style approach that freezes already-committed actions and generates the remainder of the chunk, and argues that RTC built on flow-matching policies is structurally suboptimal: it relies on inference-time corrections, which reduce the benefit of pre-training and add computation and latency.
- The authors propose DiscreteRTC, built on discrete diffusion policies that generate action chunks by iteratively unmasking tokens, positioning them as a natural fit for asynchronous execution with no external correction mechanism.
- DiscreteRTC is presented as achieving the inpainting behavior without any fine-tuning, and its adaptive early stopping lowers inference cost while improving execution success.
- Experiments on dynamic simulated benchmarks and real-world dynamic manipulation tasks reportedly show higher success rates than continuous RTC and other baselines, including a 50% higher success rate on a real-world dynamic pick task versus flow-matching-based RTC.
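The inpainting-style generation the bullets describe can be sketched in a few lines: committed actions are frozen in place while the masked remainder of the chunk is filled in over several unmasking steps, stopping early once nothing is left masked. This is a hypothetical toy sketch, not the paper's implementation; `toy_policy_logits`, the vocabulary size, and the confidence-based unmasking schedule are all illustrative assumptions.

```python
import numpy as np

MASK = -1   # sentinel for not-yet-generated action tokens
VOCAB = 16  # toy discretized action vocabulary size (assumption)

def toy_policy_logits(tokens, rng):
    # Stand-in for a trained discrete diffusion policy: returns
    # per-position logits over the action vocabulary. Hypothetical.
    return rng.standard_normal((len(tokens), VOCAB))

def discrete_rtc_chunk(committed, chunk_len, steps=8, seed=0):
    """Inpainting-style chunk generation: committed actions stay frozen,
    masked positions are unmasked over at most `steps` iterations, and
    the loop exits early once everything is filled (a crude stand-in
    for the paper's adaptive early stopping)."""
    rng = np.random.default_rng(seed)
    tokens = np.full(chunk_len, MASK, dtype=int)
    tokens[: len(committed)] = committed  # freeze the committed prefix
    for step in range(steps):
        masked = np.flatnonzero(tokens == MASK)
        if masked.size == 0:
            break  # early stop: no masked positions remain
        logits = toy_policy_logits(tokens, rng)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        # unmask the most confident masked positions this step
        conf = probs[masked].max(axis=1)
        k = max(1, masked.size // (steps - step))
        pick = masked[np.argsort(conf)[-k:]]
        tokens[pick] = probs[pick].argmax(axis=1)
    return tokens
```

The key property for asynchronous execution is visible in the sketch: because generation is defined over a partially masked sequence, conditioning on already-executing actions costs nothing extra, whereas a flow-matching policy would need an added inference-time correction to achieve the same effect.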