OmniFit: Multi-modal 3D Body Fitting via Scale-agnostic Dense Landmark Prediction

arXiv cs.CV / April 24, 2026


Key Points

  • The paper introduces OmniFit, a multi-modal 3D human body fitting method that works with point clouds, partial depth, full scans, or images without requiring known metric scale.
  • OmniFit uses a conditional transformer decoder to map surface points directly to dense body landmarks, which are then leveraged to fit SMPL-X body parameters.
  • A plug-and-play image adapter can add visual cues to compensate when geometric information is incomplete, improving robustness across input types.
  • The approach also includes a scale predictor that normalizes subjects to canonical body proportions, enabling scale-agnostic fitting for both real and synthetic assets.
  • Experiments report large improvements (57.1% to 80.9%) over state-of-the-art methods, and the authors claim two firsts: surpassing multi-view optimization baselines and reaching millimeter-level accuracy on the CAPE and 4D-DRESS benchmarks.

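The core idea in the key points above, a conditional transformer decoder whose learnable queries attend to surface-point features and regress dense landmarks, which are then used to fit body parameters, can be sketched as follows. This is a minimal illustration, not the paper's architecture: the layer sizes, query count, and the toy linear body model standing in for SMPL-X are all assumptions.

```python
import torch
import torch.nn as nn

class LandmarkDecoder(nn.Module):
    """Sketch of a conditional transformer decoder: learnable landmark
    queries cross-attend to embedded surface points and each query
    regresses one dense 3D body landmark. (Dimensions are illustrative.)"""
    def __init__(self, n_landmarks=64, d_model=128):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_landmarks, d_model))
        self.point_embed = nn.Linear(3, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 3)  # per-query 3D landmark position

    def forward(self, points):                      # points: (B, N, 3)
        memory = self.point_embed(points)           # (B, N, d_model)
        q = self.queries.unsqueeze(0).expand(points.shape[0], -1, -1)
        return self.head(self.decoder(q, memory))   # (B, n_landmarks, 3)

def fit_body_params(landmarks, template, n_params=10, steps=200, lr=0.05):
    """Stand-in for the SMPL-X fitting stage: optimize a low-dimensional
    linear body model (random toy shape basis, NOT SMPL-X) so that its
    landmarks match the predicted ones under a least-squares loss."""
    basis = torch.randn(n_params, landmarks.shape[0], 3) * 0.01
    params = torch.zeros(n_params, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model_lm = template + torch.einsum("p,pld->ld", params, basis)
        loss = ((model_lm - landmarks) ** 2).mean()
        loss.backward()
        opt.step()
    return params.detach(), loss.item()
```

In the actual method the fitted model is SMPL-X, whose pose and shape parameters would replace the toy linear basis here; the point of the sketch is the two-stage structure: feed-forward dense landmark prediction, then parametric fitting against those landmarks.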
Abstract

Fitting an underlying body model to 3D clothed human assets has been extensively studied, yet most approaches focus on either single-modal inputs such as point clouds or multi-view images alone, often requiring a known metric scale. This constraint is frequently impractical, especially for AI-generated assets where scale distortion is common. We propose OmniFit, a method that can seamlessly handle diverse multi-modal inputs, including full scans, partial depth observations, and image captures, while remaining scale-agnostic for both real and synthetic assets. Our key innovation is a simple yet effective conditional transformer decoder that directly maps surface points to dense body landmarks, which are then used for SMPL-X parameter fitting. In addition, an optional plug-and-play image adapter incorporates visual cues to compensate for missing geometric information. We further introduce a dedicated scale predictor that rescales subjects to canonical body proportions. OmniFit substantially outperforms state-of-the-art methods by 57.1 to 80.9 percent across daily and loose clothing scenarios. To the best of our knowledge, it is the first body fitting method to surpass multi-view optimization baselines and the first to achieve millimeter-level accuracy on the CAPE and 4D-DRESS benchmarks.