Elderly-Contextual Data Augmentation via Speech Synthesis for Elderly ASR

arXiv cs.CL · April 29, 2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper proposes an elderly-contextual data augmentation pipeline for elderly ASR (EASR) by combining LLM-based transcript paraphrasing with text-to-speech (TTS) synthesis using elderly reference speakers.
  • Starting from an elderly speech dataset, the LLM generates elderly-contextual paraphrases, and the TTS model produces synthetic speech that is paired with those paraphrases to create new audio-text training examples (see the sketch after this list).
  • The synthetic and original data are merged to fine-tune Whisper without any architectural changes, countering EASR's data scarcity while better covering the distinct characteristics of elderly speech.
  • Experiments on English and Korean elderly datasets (speakers aged 70 and above) show consistent gains over conventional augmentation baselines, including up to a 58.2% WER reduction versus the Whisper baseline.
  • The authors also study how augmentation ratio and the mix of reference speakers affect performance in low-resource EASR settings.
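
The pipeline in the key points reduces to a small amount of glue code. Below is a minimal sketch under stated assumptions: `paraphrase_with_llm` and `synthesize_tts` are hypothetical stand-ins for the paper's LLM and TTS components (the paper does not expose an API), and the speaker-rotation and budget logic is an illustrative reading of the augmentation-ratio and reference-speaker analysis, not the authors' exact procedure.

```python
# Sketch of the elderly-contextual augmentation pipeline (illustrative only).
# `paraphrase_with_llm` and `synthesize_tts` are hypothetical stand-ins for
# the paper's LLM and TTS components; replace them with real model calls.

from dataclasses import dataclass

@dataclass
class Example:
    audio_path: str  # path to a waveform file
    text: str        # transcript

def paraphrase_with_llm(text: str, n: int = 2) -> list[str]:
    """Placeholder: a real implementation would prompt an LLM for n
    elderly-contextual paraphrases of the transcript."""
    return [text] * n

def synthesize_tts(text: str, reference_speaker: str) -> str:
    """Placeholder: a real implementation would run TTS conditioned on an
    elderly reference speaker and return the synthesized waveform's path."""
    return f"synth_{reference_speaker}_{abs(hash(text)) % 10**8}.wav"

def augment(dataset: list[Example], speakers: list[str],
            ratio: float = 1.0) -> list[Example]:
    """Generate about ratio * len(dataset) synthetic pairs, rotating over
    the elderly reference speakers, and merge them with the originals."""
    budget = int(ratio * len(dataset))
    synthetic: list[Example] = []
    for ex in dataset:
        if len(synthetic) >= budget:
            break
        for para in paraphrase_with_llm(ex.text):
            if len(synthetic) >= budget:
                break
            spk = speakers[len(synthetic) % len(speakers)]
            synthetic.append(Example(synthesize_tts(para, spk), para))
    return dataset + synthetic  # merged set used to fine-tune Whisper
```

With `ratio=1.0` this roughly doubles the training set; sweeping `ratio` and varying the `speakers` list would correspond to the augmentation-ratio and reference-speaker-composition analyses the authors report.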

Abstract

Despite recent progress in automatic speech recognition (ASR), elderly ASR (EASR) remains challenging due to limited training data and the distinct acoustic and linguistic characteristics of elderly speech. In this work, we address data scarcity in EASR through a data augmentation pipeline that combines large language model (LLM)-based transcript paraphrasing with text-to-speech (TTS) synthesis. Given an elderly speech dataset, the LLM first generates elderly-contextual paraphrases of the original transcripts, and the TTS model then synthesizes corresponding speech using elderly reference speakers. The resulting synthetic audio-text pairs are merged with the original data to fine-tune Whisper without architectural modification. We further analyze the effects of augmentation ratio and reference-speaker composition in low-resource EASR. Experiments on English and Korean elderly speech datasets from speakers aged 70 and above show that the proposed method consistently improves performance over conventional augmentation baselines, achieving up to a 58.2% reduction in word error rate (WER) compared with the Whisper baseline.
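
The abstract's final step, fine-tuning Whisper on the merged data without architectural modification, maps directly onto the standard Hugging Face Whisper interface. Here is a minimal sketch of one training step; the `openai/whisper-small` checkpoint and the learning rate are illustrative assumptions, not details from the paper.

```python
# One fine-tuning step of Whisper on a merged (audio, transcript) pair.
# Checkpoint and hyperparameters are illustrative, not from the paper.

import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(waveform, transcript: str) -> float:
    """Single gradient step on one 16 kHz (audio, text) pair."""
    inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(input_features=inputs.input_features, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Read as a relative reduction, the reported 58.2% figure means a hypothetical baseline WER of 20.0% would fall to roughly 20.0 × (1 − 0.582) ≈ 8.4%.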