SignSparK: Efficient Multilingual Sign Language Production via Sparse Keyframe Learning

arXiv cs.CV / 3/12/2026

📰 News · Models & Research

Key Points

  • The paper tackles the trade-off in sign language production between direct text-to-pose models and dictionary-retrieval methods, proposing sparse keyframes to better capture the underlying kinematic distribution of signing.
  • It introduces FAST, an ultra-efficient sign segmentation model for automatically mining temporal boundaries, and SignSparK, a large-scale Conditional Flow Matching framework that uses the mined keyframes to synthesize 3D signing sequences in SMPL-X and MANO spaces.
  • The approach enables Keyframe-to-Pose generation for precise spatiotemporal editing and achieves high-fidelity synthesis in fewer than ten sampling steps, scalable across four sign languages.
  • Evaluations show state-of-the-art performance on diverse SLP tasks and multilingual benchmarks, aided by 3D Gaussian Splatting for photorealistic rendering.
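The core idea in the key points — predicting dense motion from sparse keyframe anchors rather than regressing poses directly from text — can be illustrated with a minimal sketch. The function below is purely illustrative (the paper does not publish this code): it linearly interpolates sparse anchor poses into a dense sequence, the kind of coarse conditioning signal a generative model could then refine into natural articulation.

```python
import numpy as np

def interpolate_keyframes(keyframes, key_idx, seq_len):
    """Linearly interpolate sparse keyframe poses into a dense sequence.

    keyframes: (K, D) array of pose vectors at the anchor frames
    key_idx:   (K,) sorted frame indices of those anchors
    seq_len:   total number of frames to produce
    """
    dense = np.empty((seq_len, keyframes.shape[1]))
    frames = np.arange(seq_len)
    for d in range(keyframes.shape[1]):
        # per-dimension piecewise-linear interpolation between anchors
        dense[:, d] = np.interp(frames, key_idx, keyframes[:, d])
    return dense

# toy example: three 2-D "poses" anchoring a 10-frame sequence
kf = np.array([[0.0, 0.0], [1.0, 2.0], [0.5, 1.0]])
dense = interpolate_keyframes(kf, np.array([0, 4, 9]), 10)
print(dense.shape)  # (10, 2)
```

In the paper's setting, the learned model replaces this linear fill-in, which is what mitigates the regression-to-the-mean effect while keeping the anchors as hard constraints for editing.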

Abstract

Generating natural and linguistically accurate sign language avatars remains a formidable challenge. Current Sign Language Production (SLP) frameworks face a stark trade-off: direct text-to-pose models suffer from regression-to-the-mean effects, while dictionary-retrieval methods produce robotic, disjointed transitions. To resolve this, we propose a novel training paradigm that leverages sparse keyframes to capture the true underlying kinematic distribution of human signing. By predicting dense motion from these discrete anchors, our approach mitigates regression-to-the-mean while ensuring fluid articulation. To realize this paradigm at scale, we first introduce FAST, an ultra-efficient sign segmentation model that automatically mines precise temporal boundaries. We then present SignSparK, a large-scale Conditional Flow Matching (CFM) framework that utilizes these extracted anchors to synthesize 3D signing sequences in SMPL-X and MANO spaces. This keyframe-driven formulation uniquely unlocks Keyframe-to-Pose (KF2P) generation, making precise spatiotemporal editing of signing sequences possible. Furthermore, our reconstruction-based CFM objective enables high-fidelity synthesis in fewer than ten sampling steps, allowing SignSparK to scale across four distinct sign languages and establishing the largest multilingual SLP framework to date. Finally, by integrating 3D Gaussian Splatting for photorealistic rendering, we demonstrate through extensive evaluation that SignSparK establishes a new state-of-the-art across diverse SLP tasks and multilingual benchmarks.
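The abstract's claim of high-fidelity synthesis in fewer than ten sampling steps follows from Conditional Flow Matching's straight-line probability paths, which a coarse Euler integrator traverses well. The sketch below is a generic CFM skeleton, not the paper's implementation: `cfm_pair` builds one training example (the network would regress the target velocity), and `euler_sample` shows few-step generation. The closed-form "oracle" velocity field at the bottom stands in for a trained network, purely so the example runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_pair(x1, rng):
    """One CFM training example for a data sample x1: draw noise x0
    and a time t, form the linear interpolant x_t, and return the
    straight-line velocity (x1 - x0) that the network regresses to."""
    x0 = rng.standard_normal(x1.shape)
    t = rng.random()
    xt = (1 - t) * x0 + t * x1
    return xt, t, x1 - x0

def euler_sample(v_fn, x0, n_steps=8):
    """Few-step Euler integration of a velocity field v_fn(x, t)
    from t=0 to t=1; near-straight CFM paths keep n_steps small."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * v_fn(x, i * dt)
    return x

# toy oracle field: every trajectory heads straight to one fixed "pose"
target = np.array([1.0, -2.0, 0.5])
v_oracle = lambda x, t: (target - x) / max(1.0 - t, 1e-6)
x = euler_sample(v_oracle, rng.standard_normal(3))
print(np.allclose(x, target))  # True
```

In SignSparK the velocity network would additionally be conditioned on the extracted keyframe anchors (and, per the abstract, trained with a reconstruction-based objective), but the sampling loop keeps this few-step structure.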