Automatic Speech Recognition for Documenting Endangered Languages: Case Study of Ikema Miyakoan

arXiv cs.AI / 3/30/2026


Key Points

  • The paper presents an ongoing case study developing an automatic speech recognition (ASR) system to document Ikema Miyakoan, a severely endangered Ryukyuan language in Okinawa, Japan.
  • The authors build a speech corpus from field recordings and report training an ASR model with a character error rate as low as 15%.
  • The study evaluates how ASR assistance affects transcription efficiency and finds that it can substantially reduce both transcription time and cognitive load.
  • The work positions ASR as a practical, scalable pathway toward technology-supported documentation and potential revitalization of endangered languages.
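Character error rate (CER), the metric cited above, is the character-level edit distance between the model's hypothesis and the reference transcript, divided by the reference length. A minimal sketch of how it is computed (function names are illustrative, not from the paper):

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance between two strings, single-row dynamic programming."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance between ref[:i] and hyp[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                          # deletion
                dp[j - 1] + 1,                      # insertion
                prev + (ref[i - 1] != hyp[j - 1]),  # substitution (or match)
            )
            prev = cur
    return dp[n]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edits needed to turn hypothesis into reference,
    normalized by reference length. A CER of 0.15 means roughly 15 errors
    per 100 reference characters."""
    return edit_distance(reference, hypothesis) / len(reference)
```

So a reported CER "as low as 15%" means the ASR draft differs from a correct transcript in about 15 characters per 100, which is why a human transcriber correcting ASR output can work faster than transcribing from scratch.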

Abstract

Language endangerment poses a major challenge to linguistic diversity worldwide, and technological advances have opened new avenues for documentation and revitalization. Among these, automatic speech recognition (ASR) has shown increasing potential to assist in the transcription of endangered language data. This study focuses on Ikema, a severely endangered Ryukyuan language spoken in Okinawa, Japan, with approximately 1,300 remaining speakers, most of whom are over 60 years old. We present an ongoing effort to develop an ASR system for Ikema based on field recordings. Specifically, we (1) construct a {\totaldatasethours}-hour speech corpus from field recordings, (2) train an ASR model that achieves a character error rate as low as 15%, and (3) evaluate the impact of ASR assistance on the efficiency of speech transcription. Our results demonstrate that ASR integration can substantially reduce transcription time and cognitive load, offering a practical pathway toward scalable, technology-supported documentation of endangered languages.