
Geometry-Aware Metric Learning for Cross-Lingual Few-Shot Sign Language Recognition on Static Hand Keypoints

arXiv cs.CV / 11 Mar 2026


Key Points

  • The paper proposes a geometry-aware metric-learning framework for cross-lingual few-shot learning on static hand keypoints, addressing the challenges of sign language recognition (SLR) in low-resource settings.
  • It introduces a 20-dimensional inter-joint angle descriptor derived from MediaPipe hand keypoints; because the angles are invariant to rotation, translation, and scaling, the descriptor reduces domain shift caused by differing camera viewpoints and hand sizes (see the sketch after this list).
  • The proposed method substantially improves accuracy on four diverse fingerspelling alphabets and, with a lightweight MLP encoder, enables frozen cross-lingual transfer that in many cases exceeds within-domain performance.
  • These results demonstrate the value of invariant hand-geometry features for building portable, robust SLR systems applicable to languages with scarce annotated data.
  • By leveraging few-shot transfer learning, the method offers a scalable alternative that does not depend on large labeled corpora, advancing practical SLR technology for typologically diverse sign languages.
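
The page does not give the exact angle construction, so the following is a minimal sketch of one plausible 20-dimensional descriptor over MediaPipe's 21 hand landmarks: three flexion angles between consecutive bones per finger, plus one orientation angle of each proximal bone against the palm normal (4 × 5 = 20). Only the landmark layout comes from MediaPipe Hands; the specific angle set and the palm-normal reference are assumptions, not the authors' published definition. Angles between difference vectors are unchanged by rotation, translation, and isotropic scaling, matching the invariances claimed above.

    # Hypothetical 20-D inter-joint angle descriptor from MediaPipe hand
    # keypoints. The angle set below is an assumption; only the input layout
    # (21 landmarks, wrist = index 0) comes from MediaPipe Hands itself.
    import numpy as np

    FINGERS = [
        [1, 2, 3, 4],      # thumb
        [5, 6, 7, 8],      # index
        [9, 10, 11, 12],   # middle
        [13, 14, 15, 16],  # ring
        [17, 18, 19, 20],  # pinky
    ]
    WRIST, INDEX_MCP, PINKY_MCP = 0, 5, 17

    def _angle(u, v, eps=1e-8):
        # Angle between two vectors: invariant to rotation and isotropic
        # scaling; translation cancels because inputs are differences.
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def angle_descriptor(lm):
        # lm: (21, 3) array of hand keypoints -> (20,) angle feature vector.
        assert lm.shape == (21, 3)
        # Palm normal from two palm bones; it rotates with the hand, so
        # angles measured against it are pose-invariant.
        normal = np.cross(lm[INDEX_MCP] - lm[WRIST], lm[PINKY_MCP] - lm[WRIST])
        feats = []
        for chain in FINGERS:
            pts = lm[[WRIST] + chain]     # wrist -> base -> ... -> fingertip
            bones = np.diff(pts, axis=0)  # 4 consecutive bone vectors
            feats.append(_angle(bones[0], normal))  # orientation vs. palm
            feats += [_angle(bones[i], bones[i + 1]) for i in range(3)]
        return np.asarray(feats, dtype=np.float32)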

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09213 (cs)
[Submitted on 10 Mar 2026]

Title: Geometry-Aware Metric Learning for Cross-Lingual Few-Shot Sign Language Recognition on Static Hand Keypoints

Authors: Chayanin Chamachot and one other author
Abstract: Sign language recognition (SLR) systems typically require large labeled corpora for each language, yet the majority of the world's 300+ sign languages lack sufficient annotated data. Cross-lingual few-shot transfer (pretraining on a data-rich source language and adapting with only a handful of target-language examples) offers a scalable alternative, but conventional coordinate-based keypoint representations are susceptible to domain shift arising from differences in camera viewpoint, hand scale, and recording conditions. This shift is particularly detrimental in the few-shot regime, where class prototypes estimated from only K examples are highly sensitive to extrinsic variance. We propose a geometry-aware metric-learning framework centered on a compact 20-dimensional inter-joint angle descriptor derived from MediaPipe static hand keypoints. These angles are invariant to SO(3) rotation, translation, and isotropic scaling, eliminating the dominant sources of cross-dataset shift and yielding tighter, more stable class prototypes. Evaluated on four fingerspelling alphabets spanning typologically diverse sign languages (ASL, LIBRAS, Arabic Sign Language, and Thai Sign Language), the proposed angle features improve over normalized-coordinate baselines by up to 25 percentage points within-domain and enable frozen cross-lingual transfer that frequently exceeds within-domain accuracy, using a lightweight MLP encoder with about 10^5 parameters. These findings demonstrate that invariant hand-geometry descriptors provide a portable and effective foundation for cross-lingual few-shot SLR in low-resource settings.
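
As a concrete illustration of the metric-learning pipeline the abstract outlines, the sketch below builds class prototypes from the K target-language support examples and labels queries by the nearest prototype in embedding space. The two-layer MLP is a randomly weighted stand-in for the paper's pretrained frozen encoder (the real model has roughly 10^5 parameters), and squared Euclidean distance is an assumed choice of metric; neither detail is specified on this page.

    # Hedged sketch of frozen few-shot transfer with class prototypes.
    # Encoder weights are random placeholders for the pretrained MLP;
    # layer sizes and the distance metric are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = 0.1 * rng.standard_normal((20, 128))   # 20-D angle features in
    W2 = 0.1 * rng.standard_normal((128, 64))   # 64-D embeddings out

    def encode(x):
        # Frozen two-layer MLP with ReLU: (..., 20) -> (..., 64).
        return np.maximum(x @ W1, 0.0) @ W2

    def class_prototypes(support_x, support_y, n_way):
        # Mean support embedding per class (N-way, K-shot).
        emb = encode(support_x)
        return np.stack([emb[support_y == c].mean(axis=0)
                         for c in range(n_way)])

    def predict(query_x, protos):
        # Nearest prototype under squared Euclidean distance.
        q = encode(query_x)
        d = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        return d.argmin(axis=1)

    # Toy 5-way 5-shot episode on random 20-D descriptors (illustration only).
    n_way, k_shot = 5, 5
    sx = rng.standard_normal((n_way * k_shot, 20))
    sy = np.repeat(np.arange(n_way), k_shot)
    print(predict(rng.standard_normal((3, 20)), class_prototypes(sx, sy, n_way)))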
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09213 [cs.CV]
  (or arXiv:2603.09213v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09213

Submission history

From: Chayanin Chamachot
[v1] Tue, 10 Mar 2026 05:31:46 UTC (262 KB)