State Space Models are Effective Sign Language Learners: Exploiting Phonological Compositionality for Vocabulary-Scale Recognition

arXiv cs.CV / 4/13/2026


Key Points

  • Sign language recognition models often fail to scale to realistic vocabularies because they treat signs as atomic visual patterns instead of leveraging the language’s phonological compositional structure.
  • The paper proposes PHONSSM, which enforces phonological decomposition using anatomically grounded graph attention, explicit factorization into orthogonal subspaces, and prototype-based classification for few-shot transfer (a small illustrative sketch of the factorization idea follows this list).
  • Trained on skeleton data only, PHONSSM attains 72.1% on WLASL2000, outperforming the skeleton-based state of the art by 18.4 percentage points and surpassing many RGB approaches without using video.
  • The improvements are largest in the few-shot setting (a 225% relative gain), and the model demonstrates zero-shot transfer to ASL Citizen that beats supervised RGB baselines.
  • The authors conclude the vocabulary scaling bottleneck is largely a representation learning issue and can be addressed with compositional inductive biases aligned with linguistic structure.
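
To make the "explicit factorization into orthogonal subspaces" point concrete, the sketch below (a minimal PyTorch illustration, not the authors' released implementation) projects a pooled sign embedding into four phonological subspaces, one per parameter, and adds a soft penalty that discourages the projection matrices from overlapping. The class name, dimensions, and the exact form of the penalty are assumptions made for the example.

```python
# Minimal sketch of phonological factorization into (approximately) orthogonal
# subspaces; names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class PhonologicalFactorization(nn.Module):
    PARAMETERS = ("handshape", "location", "movement", "orientation")

    def __init__(self, d_model: int = 256, d_sub: int = 64):
        super().__init__()
        # One linear projector per phonological parameter.
        self.projectors = nn.ModuleDict(
            {name: nn.Linear(d_model, d_sub) for name in self.PARAMETERS}
        )

    def forward(self, x: torch.Tensor) -> dict:
        # x: (batch, d_model) pooled sign representation ->
        # one (batch, d_sub) embedding per phonological parameter.
        return {name: proj(x) for name, proj in self.projectors.items()}

    def orthogonality_penalty(self) -> torch.Tensor:
        # Penalize overlap between projection matrices so that each subspace
        # is pushed to encode a distinct phonological parameter.
        weights = [self.projectors[name].weight for name in self.PARAMETERS]
        terms = []
        for i in range(len(weights)):
            for j in range(i + 1, len(weights)):
                terms.append((weights[i] @ weights[j].T).pow(2).mean())
        return torch.stack(terms).sum()
```

A training loss of the form `classification_loss + lambda * model.orthogonality_penalty()` would then learn the subspaces jointly with recognition; the weighting and exact regularizer here are illustrative.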

Abstract

Sign language recognition suffers from catastrophic scaling failure: models achieving high accuracy on small vocabularies collapse at realistic sizes. Existing architectures treat signs as atomic visual patterns, learning flat representations that cannot exploit the compositional structure of sign languages, which are systematically organized from discrete phonological parameters (handshape, location, movement, orientation) reused across the vocabulary. We introduce PHONSSM, enforcing phonological decomposition through anatomically grounded graph attention, explicit factorization into orthogonal subspaces, and prototypical classification enabling few-shot transfer. Using skeleton data alone on the largest ASL dataset ever assembled (5,565 signs), PHONSSM achieves 72.1% on WLASL2000 (+18.4 pp over the skeleton-based state of the art), surpassing most RGB methods without video input. Gains are most dramatic in the few-shot regime (+225% relative), and the model transfers zero-shot to ASL Citizen, exceeding supervised RGB baselines. The vocabulary scaling bottleneck is fundamentally a representation learning problem, solvable through compositional inductive biases mirroring linguistic structure.
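
The prototypical classification that enables few-shot and zero-shot transfer can be pictured with a short sketch in the style of prototypical networks (the exact formulation in the paper may differ): each sign's prototype is the mean embedding of its available examples, and a query is assigned to the nearest prototype, so covering new vocabulary only requires embedding a few examples rather than retraining a classifier head.

```python
# Minimal prototype-based classifier sketch (illustrative, not the paper's code).
import torch


def build_prototypes(support_emb: torch.Tensor,
                     support_labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    # support_emb: (n_support, d), support_labels: (n_support,)
    # Assumes every class has at least one support example.
    d = support_emb.size(1)
    prototypes = torch.zeros(num_classes, d)
    for c in range(num_classes):
        prototypes[c] = support_emb[support_labels == c].mean(dim=0)
    return prototypes


def classify(query_emb: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    # Nearest prototype under Euclidean distance (the argmin is the same as
    # with the squared distance commonly used in prototypical networks).
    dists = torch.cdist(query_emb, prototypes)  # (n_query, num_classes)
    return dists.argmin(dim=1)
```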