MAEPose: Self-Supervised Spatiotemporal Learning for Human Pose Estimation on mmWave Video

arXiv cs.AI / 5/4/2026

💬 Opinion · Models & Research

Key Points

  • The paper introduces MAEPose, a masked autoencoding method that performs human pose estimation directly on mmWave spectrogram video, aiming to preserve radar spatiotemporal information rather than relying on pre-extracted representations.
  • MAEPose is trained using unlabelled radar video to learn motion-aware generalized representations, and then uses a heatmap decoder to produce multi-frame pose predictions.
  • Experiments on three datasets using leave-one-person-out cross-validation show MAEPose outperforms prior baselines by up to 22.1% in MPJPE with statistical significance (p<0.05).
  • The model remains relatively robust in zero-shot scenarios with bystander interference, showing only a 6.5% error increase, and ablation studies highlight the importance of both pre-training and the heatmap decoder.
  • Modality analysis finds that using Range-Doppler video as input yields better pose performance than Range-Azimuth (or their fusion) while also reducing computational cost.
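To make the masked-autoencoding idea above concrete, here is a minimal, illustrative sketch of MAE-style random patch masking on a spectrogram video clip. This is not the paper's actual masking scheme (the clip shape, patch count, and masking ratio below are assumptions for illustration); it only shows the core mechanic: hide a fixed fraction of spatiotemporal patches so the encoder sees just the visible remainder and the decoder must reconstruct the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch_mask(n_frames, n_patches, mask_ratio=0.75):
    """Illustrative MAE-style masking: hide a fixed fraction of
    spatiotemporal patches; the encoder processes only the visible ones.

    Returns a boolean array of shape (n_frames * n_patches,),
    where True marks a masked (hidden) patch.
    """
    total = n_frames * n_patches
    n_masked = int(total * mask_ratio)
    mask = np.zeros(total, dtype=bool)
    # Choose which patch indices to hide, without replacement.
    mask[rng.choice(total, size=n_masked, replace=False)] = True
    return mask

# Hypothetical: a 16-frame Range-Doppler clip tokenised into 64 patches/frame.
mask = random_patch_mask(16, 64, mask_ratio=0.75)
print(mask.sum(), "of", mask.size, "patches hidden")  # 768 of 1024
```

With a 75% ratio, only a quarter of the tokens reach the encoder, which is what makes this style of pre-training on unlabelled radar video computationally cheap.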

Abstract

Millimetre-wave (mmWave) radar offers a more privacy-preserving alternative to RGB-based human pose estimation. However, existing methods typically rely on pre-extracted intermediate representations such as sparse point clouds or spectrogram images, discarding the rich spatiotemporal information naturally present in radar video streams, while the required signal processing adds system complexity. In addition, existing solutions are mainly trained in an end-to-end supervised manner without leveraging unlabelled raw video streams to learn generalized representations. In this study, we present MAEPose, a masked autoencoding-based human pose estimation approach that operates directly on mmWave spectrogram videos. MAEPose learns spatiotemporal, motion-aware generalized representations from unlabelled radar video, and leverages its heatmap decoder for multi-frame pose predictions. We evaluate it across three datasets using leave-one-person-out cross-validation with rigorous statistical testing. MAEPose consistently outperforms state-of-the-art baselines by up to 22.1% in MPJPE (p < 0.05), and maintains robust accuracy under zero-shot bystander interference with only a 6.5% error increase. Ablation studies confirm that both the pre-training and the heatmap decoder contribute substantially, while modality analysis indicates that Range-Doppler video input achieves better pose estimation performance than Range-Azimuth or their fusion, at lower computational cost.
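The MPJPE metric reported in the abstract has a standard definition: the Euclidean distance between each predicted joint and its ground-truth position, averaged over joints and frames. A minimal sketch (joint count and the toy inputs are illustrative assumptions, not the paper's data):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance between
    predicted and ground-truth joints.

    pred, gt: arrays of shape (frames, joints, 3), e.g. in metres.
    """
    # Per-joint Euclidean distance, then average over all joints/frames.
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy check: every joint of a 17-joint skeleton offset by 0.1 m along x.
gt = np.zeros((2, 17, 3))
pred = gt + np.array([0.1, 0.0, 0.0])
print(mpjpe(pred, gt))  # ≈ 0.1
```

A "22.1% improvement in MPJPE" means the averaged distance shrinks by that fraction relative to the baseline's value.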