Multimodal Diffusion Forcing for Forceful Manipulation

arXiv cs.RO / April 14, 2026


Key Points

  • The paper argues that standard imitation learning often ignores how different modalities (observations, actions, rewards/force signals) interact, which limits modeling of contact-rich robot behavior.
  • It introduces Multimodal Diffusion Forcing (MDF), a diffusion-based learning framework that uses random partial masking and reconstruction rather than learning a single fixed observation-to-action mapping.
  • MDF is designed to capture temporal and cross-modal dependencies, enabling abilities such as predicting how actions affect force signals and inferring latent states from incomplete observations.
  • Experiments on forceful manipulation tasks in both simulation and real-world settings show strong performance and robustness to noisy observations, indicating practical value for manipulation under uncertainty.
  • The work positions MDF as a unified approach that goes beyond action generation by learning richer trajectory structure from multimodal expert demonstrations.
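The masking-and-reconstruction idea in the points above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function name `mask_and_noise`, the per-token cosine noise schedule, and the modality dimensions are all assumptions. The core pattern is that each token in a multimodal trajectory (observations, actions, force signals) is independently either kept clean or corrupted at a random diffusion timestep, and a model would then be trained to reconstruct the clean trajectory from the partially noised one.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_and_noise(trajectory, mask_prob=0.5, num_steps=1000):
    """Randomly mask tokens across modalities and corrupt each masked
    token at an independently sampled diffusion timestep (hypothetical
    sketch of a Diffusion Forcing-style objective). Unmasked tokens
    stay clean, i.e., at noise level 0."""
    noised, masks, steps = {}, {}, {}
    for name, x in trajectory.items():
        T = x.shape[0]
        m = rng.random(T) < mask_prob                     # tokens to corrupt
        t = np.where(m, rng.integers(1, num_steps + 1, T), 0)
        # Assumed cosine schedule: alpha_bar = 1 at t = 0 (clean),
        # decays to 0 at t = num_steps (pure noise).
        alpha_bar = np.cos(0.5 * np.pi * t / num_steps) ** 2
        eps = rng.standard_normal(x.shape)
        noised[name] = (np.sqrt(alpha_bar)[:, None] * x
                        + np.sqrt(1.0 - alpha_bar)[:, None] * eps)
        masks[name], steps[name] = m, t
    return noised, masks, steps

# Toy trajectory of length 8 with three modalities (dims are made up).
traj = {
    "obs":   rng.standard_normal((8, 16)),   # e.g. image features
    "act":   rng.standard_normal((8, 7)),    # e.g. joint commands
    "force": rng.standard_normal((8, 6)),    # e.g. wrench readings
}
noised, masks, steps = mask_and_noise(traj)
```

Because the mask is sampled per token and per modality, any conditioning pattern (actions given observations, force given actions, state inference from partial observations) appears as a special case during training, which is what lets a single model serve those different roles at inference time.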

Abstract

Given a dataset of expert trajectories, standard imitation learning approaches typically learn a direct mapping from observations (e.g., RGB images) to actions. However, such methods often overlook the rich interplay between different modalities, i.e., sensory inputs, actions, and rewards, which is crucial for modeling robot behavior and understanding task outcomes. In this work, we propose Multimodal Diffusion Forcing (MDF), a unified framework for learning from multimodal robot trajectories that extends beyond action generation. Rather than modeling a fixed observation-to-action distribution, MDF applies random partial masking and trains a diffusion model to reconstruct the trajectory. This training objective encourages the model to learn temporal and cross-modal dependencies, such as predicting the effects of actions on force signals or inferring states from partial observations. We evaluate MDF on contact-rich, forceful manipulation tasks in simulated and real-world environments. Our results show that MDF not only delivers versatile functionality but also achieves strong performance and robustness under noisy observations. More visualizations can be found on the project website: https://unified-df.github.io