Selective Contrastive Learning for Gloss-Free Sign Language Translation

arXiv cs.CL / 4/27/2026


Key Points

  • The paper addresses gloss-free sign language translation by focusing on the cross-modal alignment problem between sign videos and written text.
  • It argues that CLIP-like vision-language pretraining can suffer from noisy supervision because random in-batch negatives may be semantically similar or even identical pairs mislabeled as negatives.
  • Through a trajectory-based analysis of negative video-text similarity during training, the authors find that only a small subset of negatives behave consistently in the way contrastive learning requires.
  • They propose Selective Contrastive Learning for SLT (SCL-SLT), whose Pair Selection (PS) method scores candidate negatives by their similarity dynamics across reference checkpoints and builds mini-batches with a curriculum that increasingly targets harder, more informative negatives.
  • The expected outcome is stronger contrastive supervision and improved alignment by reducing the impact of uninformative or semantically invalid negatives.
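For context on the alignment objective the key points refer to, here is a minimal sketch (not the paper's code) of a CLIP-style symmetric InfoNCE loss with random in-batch negatives. Each video embedding is paired with its own text embedding as the positive, and every other batch item serves as a negative, which is exactly where semantically similar "false negatives" can leak into the supervision.

```python
import numpy as np

def info_nce_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of embeddings.

    video_emb, text_emb: (B, D) arrays; row i of each is a matched pair.
    All off-diagonal entries act as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))         # positives sit on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Symmetric loss: video-to-text and text-to-video directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
loss = info_nce_loss(v, v)  # identical embeddings: near-perfect alignment
print(loss < info_nce_loss(v, rng.normal(size=(4, 8))))  # True
```

Note that the negatives here are determined entirely by batch composition, which is the batch-dependent, potentially noisy supervision the paper's analysis targets.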

Abstract

Sign language translation (SLT) converts continuous sign videos into spoken-language text, yet it remains challenging due to the intrinsic modality mismatch between visual signs and written text, particularly in gloss-free settings. Recent SLT systems increasingly adopt CLIP-like vision-language pretraining (VLP) for cross-modal alignment, but random in-batch contrast provides only a few, batch-dependent negatives and may mislabel semantically similar (or even identical) pairs as negatives, introducing noisy and potentially inconsistent alignment supervision. In this work, we first conduct a preliminary trajectory-based analysis that tracks negative video-text similarity over training. The results show that only a small subset of negatives exhibits the desired behavior of being consistently pushed away, while the remaining negatives display heterogeneous and often non-decreasing similarity dynamics, suggesting that random in-batch negatives are frequently uninformative for effective alignment. Motivated by this, we propose Selective Contrastive Learning for SLT (SCL-SLT) with a Pair Selection (PS) strategy. PS scores candidate negatives using similarity dynamics from reference checkpoints and constructs mini-batches via a curriculum that progressively emphasizes more challenging negatives, thereby strengthening contrastive supervision while reducing the influence of noisy or semantically invalid negatives.
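The abstract does not spell out the PS scoring function or the curriculum schedule, so the following is an illustrative sketch under stated assumptions: a negative pair is scored by the trend (fitted slope) of its video-text similarity across reference checkpoints, likely false negatives (non-decreasing trajectories) are filtered out, and the remaining negatives are ranked by a blend that shifts from trend-reliability toward current similarity (hardness) as training progresses. Both the slope-based score and the linear blend are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ps_score(sim_trajectory):
    """Score a candidate negative from its similarity values at successive
    reference checkpoints. A consistently decreasing trajectory is the
    behavior contrastive learning expects, so we score a negative as
    informative when its fitted slope is negative (downward trend)."""
    steps = np.arange(len(sim_trajectory))
    slope = np.polyfit(steps, sim_trajectory, deg=1)[0]
    return -slope  # larger score = more consistently pushed away

def curriculum_negatives(scores, similarities, epoch, total_epochs, k,
                         min_score=0.05):
    """Select k negatives for a mini-batch (illustrative curriculum).

    Negatives whose PS score is too low (flat or rising similarity, i.e.
    likely false negatives) are dropped. Among the rest, early epochs
    favor reliably-repelled negatives; later epochs weight current
    similarity more, so harder, more informative negatives dominate."""
    valid = np.where(scores > min_score)[0]
    hardness_weight = epoch / max(total_epochs - 1, 1)  # ramps 0 -> 1
    combined = ((1 - hardness_weight) * scores[valid]
                + hardness_weight * similarities[valid])
    return valid[np.argsort(combined)[::-1][:k]]

# Toy example: 5 candidate negatives tracked over 4 reference checkpoints.
trajs = np.array([
    [0.9, 0.7, 0.5, 0.3],   # consistently pushed away -> high PS score
    [0.4, 0.5, 0.4, 0.5],   # oscillating -> uninformative
    [0.2, 0.3, 0.5, 0.6],   # rising similarity -> likely false negative
    [0.8, 0.6, 0.5, 0.4],   # pushed away, still fairly similar (harder)
    [0.5, 0.5, 0.5, 0.5],   # flat -> uninformative
])
scores = np.array([ps_score(t) for t in trajs])
current_sims = trajs[:, -1]
early = curriculum_negatives(scores, current_sims, epoch=0, total_epochs=10, k=2)
late = curriculum_negatives(scores, current_sims, epoch=9, total_epochs=10, k=2)
print(early, late)
```

In this toy run only candidates 0 and 3 survive the filter, and the curriculum reorders them: the most reliably-repelled negative ranks first early on, while the harder (more similar) one ranks first late in training.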