ViHOI: Human-Object Interaction Synthesis with Visual Priors

arXiv cs.CV / March 26, 2026


Key Points

  • The paper introduces ViHOI, a diffusion-based framework for generating realistic and physically plausible 3D human-object interactions by extracting interaction “priors” from 2D images rather than relying on text-only constraints.
  • It uses a large vision-language model (VLM) to extract visual priors and applies a layer-decoupled strategy to obtain both visual and textual prior signals.
  • A Q-Former-based adapter compresses the VLM’s high-dimensional representations into compact prior tokens, enabling more effective conditional training of the diffusion model.
  • ViHOI is trained with motion-rendered images to enforce semantic alignment between reference visuals and motion sequences, and at inference it uses reference images synthesized by a text-to-image model to improve generalization to unseen objects and interaction categories.
  • Experiments report state-of-the-art performance across multiple benchmarks and improved generalization compared with prior methods.
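The Q-Former-style adapter in the second and third bullets can be pictured as a small set of learnable query tokens that cross-attend over the VLM's per-patch features and emit a fixed number of compact prior tokens. The following is a minimal NumPy sketch of that compression step; all names, dimensions, and the single-head attention form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class QFormerAdapter:
    """Hypothetical sketch of a Q-Former-style adapter: K learnable query
    tokens cross-attend over N high-dimensional VLM features and return
    K compact prior tokens (K << N)."""

    def __init__(self, num_queries=8, vlm_dim=1024, token_dim=256, seed=0):
        rng = np.random.default_rng(seed)
        # Learnable parameters (here just randomly initialized for the sketch).
        self.queries = rng.normal(size=(num_queries, token_dim)) * 0.02
        self.w_k = rng.normal(size=(vlm_dim, token_dim)) * 0.02
        self.w_v = rng.normal(size=(vlm_dim, token_dim)) * 0.02
        self.scale = token_dim ** -0.5

    def __call__(self, vlm_features):
        # vlm_features: (N, vlm_dim) patch features from a VLM layer.
        k = vlm_features @ self.w_k                               # (N, d)
        v = vlm_features @ self.w_v                               # (N, d)
        attn = softmax(self.queries @ k.T * self.scale, axis=-1)  # (K, N)
        return attn @ v                                           # (K, d)

adapter = QFormerAdapter()
features = np.random.default_rng(1).normal(size=(196, 1024))  # e.g. 14x14 patches
prior_tokens = adapter(features)
print(prior_tokens.shape)  # (8, 256)
```

The key property is that the output size is fixed by the number of queries, so the diffusion model always conditions on the same small token budget regardless of how many patch features the VLM produces.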

Abstract

Generating realistic and physically plausible 3D Human-Object Interactions (HOI) remains a key challenge in motion generation. One primary reason is that describing these physical constraints with words alone is difficult. To address this limitation, we propose a new paradigm: extracting rich interaction priors from easily accessible 2D images. Specifically, we introduce ViHOI, a novel framework that enables diffusion-based generative models to leverage rich, task-specific priors from 2D images to enhance generation quality. We utilize a large Vision-Language Model (VLM) as a powerful prior-extraction engine and adopt a layer-decoupled strategy to obtain visual and textual priors. Concurrently, we design a Q-Former-based adapter that compresses the VLM's high-dimensional features into compact prior tokens, which significantly facilitates the conditional training of our diffusion model. Our framework is trained on motion-rendered images from the dataset to ensure strict semantic alignment between visual inputs and motion sequences. During inference, it leverages reference images synthesized by a text-to-image generation model to improve generalization to unseen objects and interaction categories. Experimental results demonstrate that ViHOI achieves state-of-the-art performance, outperforming existing methods across multiple benchmarks and demonstrating superior generalization.
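The train/inference split described in the abstract (rendered ground-truth motion as the visual condition during training, a text-to-image reference at inference) can be summarized as a simple dispatch. This is an illustrative sketch with stubbed, hypothetical function names; the actual renderer and T2I model in ViHOI are not specified here.

```python
def render_motion_sequence(motion):
    """Stub: render a ground-truth motion sequence to a reference image,
    giving strict semantic alignment between image and motion at train time."""
    return {"source": "renderer", "motion": motion}

def text_to_image(prompt):
    """Stub: synthesize a reference image from text (e.g. a T2I diffusion
    model), used at inference to cover unseen objects and interactions."""
    return {"source": "t2i", "prompt": prompt}

def get_reference_image(mode, prompt, motion=None):
    # Train: condition on a rendering of the ground-truth motion.
    # Inference: condition on a synthesized reference image instead.
    if mode == "train":
        return render_motion_sequence(motion)
    return text_to_image(prompt)

train_ref = get_reference_image("train", "lift a box", motion="gt_clip_017")
infer_ref = get_reference_image("infer", "lift a box")
print(train_ref["source"], infer_ref["source"])  # renderer t2i
```

The design point is that the conditioning pathway is identical in both modes; only the origin of the reference image changes, which is what lets the model generalize beyond the categories seen in training renders.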