Evaluation of Pose Estimation Systems for Sign Language Translation

arXiv cs.CL / 4/28/2026


Key Points

  • The paper compares multiple pose estimators used in pose-based sign language translation (SLT), treating pose estimation as a key experimental variable rather than an implementation detail.
  • It evaluates both common baselines (MediaPipe Holistic, OpenPose) and newer whole-body or high-capacity models (e.g., MMPose WholeBody, OpenPifPaf, AlphaPose, SDPose, Sapiens, SMPLest-X) using a controlled SLT training setup on RWTH-PHOENIX-Weather 2014.
  • Translation quality is measured with BLEU and BLEURT, with SDPose and Sapiens achieving the strongest performance (BLEU around 11.5) versus MediaPipe’s weaker baseline (BLEU around 10).
  • Robustness analysis on higher-resolution Signsuisse videos shows Sapiens performs best under occlusion (15/15 correct), while OpenPifPaf largely fails (1/15), and missing hand keypoints correlates with lower translation scores.
  • The authors release code to reproduce the study and to make it easier for researchers to test alternative pose estimators in pose-based SLT pipelines.
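The reported link between missing hand keypoints and lower translation scores suggests a simple per-estimator diagnostic: the fraction of frames in which an estimator drops the hands entirely. A minimal sketch of such a statistic, assuming pose sequences are stored as NumPy arrays with NaN marking undetected keypoints (the array layout and function name are illustrative, not taken from the paper's released code):

```python
import numpy as np

def missing_hand_rate(pose_seq: np.ndarray, hand_slice: slice) -> float:
    """Fraction of frames in which every hand keypoint is missing (NaN).

    pose_seq: (T, K, 2) array of per-frame keypoints; NaN marks a
    keypoint the estimator failed to detect.
    hand_slice: index range of the hand keypoints along the K axis.
    """
    hands = pose_seq[:, hand_slice, :]                # (T, H, 2)
    frame_missing = np.isnan(hands).all(axis=(1, 2))  # True iff whole hand absent
    return float(frame_missing.mean())

# Toy example: 4 frames, 10 keypoints, hand keypoints at indices 6..9
seq = np.zeros((4, 10, 2))
seq[1, 6:10, :] = np.nan  # hand dropped in frame 1
seq[3, 6:10, :] = np.nan  # and again in frame 3
print(missing_hand_rate(seq, slice(6, 10)))  # -> 0.5
```

Correlating this rate with per-estimator BLEU/BLEURT would reproduce the kind of analysis the paper describes, under the NaN-for-missing convention assumed here.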

Abstract

Many sign language translation (SLT) systems operate on pose sequences instead of raw video to reduce input dimensionality, improve portability, and partially anonymize signers. The choice of pose estimator is often treated as an implementation detail, with systems defaulting to widely available tools such as MediaPipe Holistic or OpenPose. We present a systematic comparison of pose estimators for pose-based SLT, covering widely used baselines (MediaPipe Holistic, OpenPose) and newer whole-body/high-capacity models (MMPose WholeBody, OpenPifPaf, AlphaPose, SDPose, Sapiens, SMPLest-X). We quantify downstream impact by training a controlled SLT pipeline on RWTH-PHOENIX-Weather 2014 where only the pose representation varies, evaluating with BLEU and BLEURT. To contextualize translation outcomes, we analyze temporal stability, missing hand keypoints, and robustness to occlusion using higher-resolution videos from the Signsuisse dataset. SDPose and Sapiens achieve the best translation performance (BLEU ~11.5), outperforming the common MediaPipe baseline (BLEU ~10). In occlusion cases, Sapiens is correct in all tested instances (15/15), while OpenPifPaf fails in nearly all (1/15) and also yields the weakest translation scores. Estimators that frequently leave out hand keypoints are associated with lower BLEU/BLEURT. We release code that not only reproduces our experiments but also considerably lowers the barrier for other researchers to use alternative pose estimators.
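The controlled setup the abstract describes, where the SLT pipeline is held fixed and only the pose representation varies, amounts to hiding each estimator behind a common interface and looping over backends. A minimal sketch of that experimental skeleton (all names are illustrative assumptions; the authors' released code will differ):

```python
from typing import Callable, Dict, List, Sequence

# A pose backend maps a video (a sequence of frames) to a keypoint sequence.
PoseBackend = Callable[[Sequence[object]], List[List[float]]]

def run_controlled_study(
    backends: Dict[str, PoseBackend],
    videos: Sequence[Sequence[object]],
    train_and_score: Callable[[List[List[List[float]]]], float],
) -> Dict[str, float]:
    """Train and score the same SLT pipeline once per pose backend.

    Everything except the pose extraction step is shared, so score
    differences are attributable to the pose representation alone.
    """
    scores: Dict[str, float] = {}
    for name, extract in backends.items():
        pose_data = [extract(video) for video in videos]  # per-video pose sequences
        scores[name] = train_and_score(pose_data)         # e.g. BLEU on a dev set
    return scores

# Stub demonstration: two fake backends and a trivial "scorer".
videos = [[0, 1], [2, 3]]
backends = {
    "stub_a": lambda v: [[0.0] for _ in v],
    "stub_b": lambda v: [[1.0] for _ in v],
}
scorer = lambda data: sum(kp[0] for seq in data for kp in seq)
print(run_controlled_study(backends, videos, scorer))  # {'stub_a': 0.0, 'stub_b': 4.0}
```

In the paper's setting, each backend would wrap one of the compared estimators (MediaPipe Holistic, OpenPose, Sapiens, etc.) and the scorer would train the shared SLT model and report BLEU/BLEURT.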