DGHMesh: A Large-scale Dual-radar mmWave Dataset and Generalization-focused Benchmark for Human Mesh Reconstruction

arXiv cs.CV / 4/28/2026


Key Points

  • The paper introduces DGHMesh, a large-scale dual-mmWave radar dataset plus a benchmark specifically designed to evaluate human mesh reconstruction (HMR) under configuration shifts for better generalization testing.
  • DGHMesh includes synchronized data from FMCW radar, SFCW radar, RGB images, and high-precision 3D HMR annotations, totaling 360,000 frames from 15 subjects performing 8 actions.
  • The benchmark provides synchronized raw I/Q radar data and accurately calibrated radar spatial positions, enabling fair comparisons across different measurement setups and algorithm variants.
  • It also proposes mmPTM, a query-based multi-radar fusion framework that combines point clouds and imaging tubes, and reports strong accuracy and competitive generalization across multiple sub-benchmarks.
  • DGHMesh and the mmPTM code are publicly available on GitHub; the complete benchmark and code will be released after the paper's publication.
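Because the dataset ships raw I/Q samples rather than only precomputed point clouds, each method can apply its own radar preprocessing. As a minimal illustration (not the paper's pipeline, and with all signal parameters assumed), a basic range FFT over an FMCW chirp's I/Q samples recovers the target's range bin:

```python
import numpy as np

def range_profile(iq, n_fft=None):
    """Compute magnitude range profiles from complex I/Q samples.

    iq: complex array of shape (chirps, samples); FFT is taken along samples.
    """
    n_fft = n_fft or iq.shape[-1]
    win = np.hanning(iq.shape[-1])  # taper to reduce range sidelobes
    return np.abs(np.fft.fft(iq * win, n=n_fft, axis=-1))

# Synthetic single-target chirp: a pure beat tone at FFT bin 32.
n = 256
iq = np.exp(2j * np.pi * 32 * np.arange(n) / n)[None, :]
profile = range_profile(iq)
peak_bin = int(np.argmax(profile[0, : n // 2]))  # → 32
```

Real FMCW processing would continue with Doppler and angle FFTs plus CFAR detection to form a point cloud; the sketch only shows why access to raw I/Q matters for fair preprocessing comparisons.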

Abstract

Millimeter-wave (mmWave) radar has shown great potential for contactless, privacy-preserving, and robust human sensing, yet existing mmWave-based human mesh reconstruction (HMR) studies are still limited by the lack of benchmarks for generalization analysis under configuration shifts and fair comparison of different algorithms. To address this limitation, we present DGHMesh, a large-scale dual-radar mmWave dataset and generalization-focused benchmark for HMR. It contains data from 15 subjects performing 8 actions, with 360,000 synchronized frames collected from FMCW radar, SFCW radar, RGB images, and high-precision 3D HMR annotations. In addition, the dataset provides synchronized raw I/Q data from both radar modalities and accurately calibrated radar spatial positions. The benchmark is designed to evaluate HMR methods under diverse measurement configurations, including human position shifts, human orientation shifts, subarray size variations, and cross-subject settings. Based on DGHMesh, we also propose mmPTM, a query-based multi-radar fusion framework that jointly exploits point clouds and imaging tubes for HMR. Extensive experiments are conducted against representative baselines under different settings. The results demonstrate that mmPTM consistently achieves outstanding accuracy and competitive generalization capability across multiple sub-benchmarks, validating the effectiveness of multi-radar fusion and the practical value of the proposed dataset and benchmark for mmWave-based HMR research. DGHMesh and mmPTM are publicly available at https://github.com/SPIresearch/DGHMesh (the complete benchmark and code will be released after paper publication).
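Calibrated radar spatial positions are what make multi-radar fusion possible: measurements from both radars can be expressed in one coordinate frame before fusion. The sketch below shows the standard rigid transform involved; the extrinsic values and function name are invented for illustration, not taken from DGHMesh:

```python
import numpy as np

def to_common_frame(points, R, t):
    """Map an (N, 3) point cloud into a common frame via p' = R @ p + t."""
    return points @ R.T + t

# Hypothetical calibration: the second radar is yawed 90 degrees and offset
# 0.5 m along x relative to the reference radar (values assumed).
theta = np.deg2rad(90.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 0.0])

pts_radar2 = np.array([[1.0, 0.0, 0.0]])       # a point seen by radar 2
pts_common = to_common_frame(pts_radar2, R, t)  # → [[0.5, 1.0, 0.0]]
```

Without accurate extrinsics of this kind, point clouds and imaging tubes from the two radars would be misaligned, and any fusion framework such as mmPTM would inherit that geometric error.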