A cross-species neural foundation model for end-to-end speech decoding

arXiv cs.CL / 3/27/2026


Key Points

  • The paper introduces an end-to-end Brain-to-Text (BIT) neural framework for speech brain-computer interfaces that replaces cascaded phoneme-to-text pipelines with a single differentiable model.
  • A cross-task, cross-species pretrained neural encoder is used to produce representations that transfer to both attempted and imagined speech, enabling better cross-task generalization.
  • In a cascaded setup with an n-gram language model, the pretrained encoder achieves new state-of-the-art results on the Brain-to-Text ’24 and ’25 benchmarks.
  • When integrated end-to-end with audio large language models and trained with contrastive learning for cross-modal alignment, BIT substantially reduces the word error rate of the prior end-to-end approach, from 24.69% to 10.22%.
  • The authors report that small-scale audio LLMs can meaningfully improve end-to-end decoding and that their method aligns embeddings across attempted and imagined speech for more robust performance.

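The cross-modal alignment objective described above is commonly implemented as a symmetric contrastive (InfoNCE-style) loss that pulls paired neural and audio embeddings together while pushing mismatched pairs apart. The sketch below is illustrative only; the function name, batch layout, and temperature value are assumptions, not the paper's implementation.

```python
import numpy as np

def contrastive_alignment_loss(neural_emb, audio_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over paired embeddings.

    neural_emb, audio_emb: (batch, dim) arrays; row i of each forms a pair.
    All names and the temperature value are illustrative assumptions.
    """
    # L2-normalize each row so the dot product is a cosine similarity
    n = neural_emb / np.linalg.norm(neural_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = n @ a.T / temperature  # (batch, batch); matching pairs on the diagonal

    def cross_entropy_diag(l):
        # numerically stable log-softmax per row, target = diagonal index
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average the neural->audio and audio->neural directions
    return (cross_entropy_diag(logits) + cross_entropy_diag(logits.T)) / 2
```

Minimizing this loss drives each neural embedding toward its paired audio embedding in a shared space, which is the general mechanism behind the cross-modal alignment the key points describe.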
Abstract

Speech brain-computer interfaces (BCIs) aim to restore communication for people with paralysis by translating neural activity into text. Most systems use cascaded frameworks that decode phonemes before assembling sentences with an n-gram language model (LM), preventing joint optimization of all stages. Here, we introduce an end-to-end Brain-to-Text (BIT) framework that translates neural activity into coherent sentences using a single differentiable neural network. Central to our approach is a cross-task, cross-species pretrained neural encoder, whose representations transfer to both attempted and imagined speech. In a cascaded setting with an n-gram LM, the pretrained encoder establishes a new state-of-the-art (SOTA) on the Brain-to-Text '24 and '25 benchmarks. Integrated end-to-end with audio large language models (LLMs) and trained with contrastive learning for cross-modal alignment, BIT reduces the word error rate (WER) of the prior end-to-end method from 24.69% to 10.22%. Notably, we find that small-scale audio LLMs markedly improve end-to-end decoding. Beyond record-setting performance, BIT aligns attempted and imagined speech embeddings to enable cross-task generalization. Altogether, our approach advances the integration of large, diverse neural datasets, paving the way for an end-to-end decoding framework that supports seamless, differentiable optimization.