Tora3: Trajectory-Guided Audio-Video Generation with Physical Coherence

arXiv cs.CV / 4/13/2026


Key Points

  • The paper introduces Tora3, a trajectory-guided audio-video generation framework that targets physically and temporally plausible motion–sound relationships, an alignment that prior methods often fail to achieve.
  • Tora3 uses object trajectories as a shared kinematic prior by jointly guiding visual motion and acoustic events through a trajectory-aligned video motion representation and a trajectory-driven kinematic-audio alignment module.
  • It proposes a hybrid flow matching strategy that preserves trajectory fidelity in trajectory-conditioned regions while keeping local coherence where trajectories are less constrained.
  • The authors curate PAV, a large-scale audio-video dataset focused on motion-relevant patterns with automatically extracted motion annotations to better support motion-aware training.
  • Experiments against strong open-source baselines indicate that Tora3 improves motion realism, motion–sound synchronization, and overall audio-video generation quality.

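To make the "trajectory-derived second-order kinematic states" idea concrete, here is a minimal sketch of how velocity and acceleration can be derived from an object trajectory via finite differences. This is a hypothetical illustration of the kind of signal a kinematic-audio alignment module could consume, not the paper's actual module; the function name `kinematic_states` and the toy trajectory are assumptions.

```python
import numpy as np

def kinematic_states(trajectory, dt=1.0):
    """Derive second-order kinematic states (velocity, acceleration)
    from a trajectory of 2D points via finite differences.

    Hypothetical sketch: acceleration-magnitude peaks mark abrupt
    motion changes (e.g. contact events), which is where sound
    onsets would be expected.
    """
    traj = np.asarray(trajectory, dtype=float)        # shape (T, 2)
    velocity = np.gradient(traj, dt, axis=0)          # first derivative
    acceleration = np.gradient(velocity, dt, axis=0)  # second derivative
    return velocity, acceleration

# Example: a point moving right at constant speed, then abruptly stopping.
traj = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 0), (3, 0)]
vel, acc = kinematic_states(traj)
accel_mag = np.linalg.norm(acc, axis=1)
peak = int(np.argmax(accel_mag))  # index of the sharpest motion change
```

In this toy example, the acceleration magnitude peaks at the frame where the motion stops, the kind of event a motion-aware audio model would want to synchronize a sound with.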
Abstract

Audio-video (AV) generation has recently made strong progress in perceptual quality and multimodal coherence, yet generating content with plausible motion-sound relations remains challenging. Existing methods often produce object motions that are visually unstable and sounds that are only loosely aligned with salient motion or contact events, largely because they lack an explicit motion-aware structure shared by video and audio generation. We present Tora3, a trajectory-guided AV generation framework that improves physical coherence by using object trajectories as a shared kinematic prior. Rather than treating trajectories as a video-only control signal, Tora3 uses them to jointly guide visual motion and acoustic events. Specifically, we design a trajectory-aligned motion representation for video, a kinematic-audio alignment module driven by trajectory-derived second-order kinematic states, and a hybrid flow matching scheme that preserves trajectory fidelity in trajectory-conditioned regions while maintaining local coherence elsewhere. We further curate PAV, a large-scale AV dataset emphasizing motion-relevant patterns with automatically extracted motion annotations. Extensive experiments show that Tora3 improves motion realism, motion-sound synchronization, and overall AV generation quality over strong open-source baselines.
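The "hybrid flow matching" idea described above can be sketched as a masked blend of two velocity fields: inside trajectory-conditioned regions the trajectory-conditioned field dominates to preserve trajectory fidelity, while elsewhere an unconstrained field maintains local coherence. This is a minimal illustration under assumed names (`v_traj`, `v_free`, `hybrid_velocity`), not the paper's actual scheme.

```python
import numpy as np

def hybrid_velocity(v_traj, v_free, mask):
    """Blend two flow-matching velocity fields with a spatial mask.

    Hypothetical sketch: mask=1 marks trajectory-conditioned regions,
    where the trajectory-conditioned field v_traj is used; mask=0
    regions fall back to the unconstrained field v_free.
    """
    mask = np.clip(mask, 0.0, 1.0)[..., None]  # broadcast over channels
    return mask * v_traj + (1.0 - mask) * v_free

# Toy velocity fields on a 4x4 latent grid with 2 channels.
v_traj = np.ones((4, 4, 2))    # field steering toward the trajectory
v_free = np.zeros((4, 4, 2))   # unconstrained field
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0           # trajectory-conditioned region
v = hybrid_velocity(v_traj, v_free, mask)
```

A soft (fractional) mask would interpolate smoothly between the two fields at region boundaries; the hard 0/1 mask here is only for clarity.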