Think in Latent Thoughts: A New Paradigm for Gloss-Free Sign Language Translation

arXiv cs.CV / 4/17/2026


Key Points

  • The paper argues that gloss-free sign language translation (SLT) should be treated primarily as cross-modal reasoning rather than a direct video-to-text mapping, because meaning is constructed dynamically using context, space, and movement.
  • It introduces a reasoning-driven SLT framework that uses an ordered sequence of “latent thoughts” as an intermediate representation between video inputs and generated text.
  • The approach applies a plan-then-ground decoding strategy, where the model first plans what to say and then grounds that plan by looking back at the video evidence to improve coherence and faithfulness.
  • The authors also released a new large-scale gloss-free SLT dataset designed with stronger context dependencies and more realistic meanings, reporting consistent benchmark gains versus existing methods.
  • The project will publish code and data upon acceptance, with a planned release at https://github.com/fletcherjiang/SignThought.

Abstract

Many SLT systems quietly assume that brief chunks of signing map directly to spoken-language words. That assumption breaks down because signers often create meaning on the fly using context, space, and movement. We revisit SLT and argue that it is mainly a cross-modal reasoning task, not just a straightforward video-to-text conversion. We thus introduce a reasoning-driven SLT framework that uses an ordered sequence of latent thoughts as an explicit middle layer between the video and the generated text. These latent thoughts gradually extract and organize meaning over time. On top of this, we use a plan-then-ground decoding method: the model first decides what it wants to say, and then looks back at the video to find the evidence. This separation improves coherence and faithfulness. We also built and released a new large-scale gloss-free SLT dataset with stronger context dependencies and more realistic meanings. Experiments across several benchmarks show consistent gains over existing gloss-free methods. Code and data will be released upon acceptance at https://github.com/fletcherjiang/SignThought.
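The paper's code is not yet released, so the details of plan-then-ground decoding are not public. Purely as a conceptual illustration, the two-stage idea can be sketched as a toy pipeline: a planner first proposes an ordered sequence of candidate "thoughts," and a grounding step then filters them against frame-level evidence before text is generated. Every name here (`Thought`, `plan`, `ground`, the confidence scores) is a hypothetical stand-in, not the authors' implementation.

```python
# Hypothetical sketch of "plan-then-ground" decoding. This is NOT the
# SignThought implementation; all names and scoring are illustrative.
from dataclasses import dataclass

@dataclass
class Thought:
    content: str   # a candidate semantic unit the model wants to express
    score: float   # assumed planner confidence, for illustration only

def plan(video_summary: list[str]) -> list[Thought]:
    """Stage 1 (plan): turn coarse video cues into ordered candidate thoughts."""
    return [Thought(content=cue, score=0.5 + 0.1 * i)
            for i, cue in enumerate(video_summary)]

def ground(thoughts: list[Thought], frame_evidence: set[str],
           threshold: float = 0.5) -> list[Thought]:
    """Stage 2 (ground): keep only thoughts supported by frame-level evidence."""
    return [t for t in thoughts
            if t.content in frame_evidence and t.score >= threshold]

def decode(video_summary: list[str], frame_evidence: set[str]) -> str:
    planned = plan(video_summary)              # decide what to say
    grounded = ground(planned, frame_evidence) # look back at the video
    return " ".join(t.content for t in grounded)

# An unsupported cue ("store") is planned but dropped at grounding time.
print(decode(["hello", "store", "tomorrow"], {"hello", "tomorrow"}))
```

The separation mirrors the abstract's claim: planning fixes the intended content, while grounding enforces faithfulness by discarding anything the video evidence does not support.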