Decoding the decoder: Contextual sequence-to-sequence modeling for intracortical speech decoding

arXiv cs.CL · March 24, 2026


Key Points

  • The paper studies whether contextual sequence-to-sequence decoding improves intracortical speech-to-language decoding versus prior approaches that mainly use framewise phoneme decoding plus language models.
  • It proposes a multitask Transformer encoder–decoder that jointly predicts phoneme sequences, word sequences, and auxiliary acoustic features from area 6v intracortical recordings.
  • To handle day-to-day neural nonstationarity, the authors introduce the Neural Hammer Scalpel (NHS) calibration module, combining global alignment with feature-wise modulation.
  • On the Willett et al. dataset, the method reports state-of-the-art performance for phonemes (14.3% error rate) and improved word decoding (25.6% WER with direct decoding; 19.4% WER with candidate generation and rescoring).
  • Analyses of held-out days and attention patterns suggest that performance degrades with temporal distance from the training days, and that encoder representations exhibit recurring temporal chunking whose segments the phoneme and word decoders use in distinct ways.
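The paper does not spell out the internals of the NHS module beyond "global alignment with feature-wise modulation," but that combination can be sketched as a shared linear map followed by a FiLM-style per-feature scale and shift. The function and parameter names below are illustrative, not taken from the paper:

```python
import numpy as np

def nhs_calibrate(x, W, b, gamma, beta):
    """Hypothetical sketch of an NHS-style day calibration.

    x     : (T, D) array of neural features for one day
    W, b  : global linear alignment shared across features
    gamma, beta : per-feature (D,) modulation parameters
    """
    aligned = x @ W + b            # global alignment of the day's features
    return gamma * aligned + beta  # feature-wise scale and shift

# Toy usage: identity alignment, per-feature modulation only.
T, D = 5, 3
x = np.ones((T, D))
W = np.eye(D)
b = np.zeros(D)
gamma = np.array([2.0, 1.0, 0.5])
beta = np.array([0.0, -1.0, 1.0])
y = nhs_calibrate(x, W, b, gamma, beta)
```

In this sketch the day-specific parameters (`W`, `b`, `gamma`, `beta`) would be fit per recording session, while the downstream Transformer stays frozen; the paper's reported gains over "linear or no day-specific transform" suggest the feature-wise modulation is doing work beyond the global alignment alone.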

Abstract

Speech brain–computer interfaces require decoders that translate intracortical activity into linguistic output while remaining robust to limited data and day-to-day variability. While prior high-performing systems have largely relied on framewise phoneme decoding combined with downstream language models, it remains unclear what contextual sequence-to-sequence decoding contributes to sublexical neural readout, robustness, and interpretability. We evaluated a multitask Transformer-based sequence-to-sequence model for attempted speech decoding from area 6v intracortical recordings. The model jointly predicts phoneme sequences, word sequences, and auxiliary acoustic features. To address day-to-day nonstationarity, we introduced the Neural Hammer Scalpel (NHS) calibration module, which combines global alignment with feature-wise modulation. We further analyzed held-out-day generalization and attention patterns in the encoder and decoders. On the Willett et al. dataset, the proposed model achieved a state-of-the-art phoneme error rate of 14.3%. Word decoding reached 25.6% WER with direct decoding and 19.4% WER with candidate generation and rescoring. NHS substantially improved both phoneme and word decoding relative to a linear transform or no day-specific transform, while held-out-day experiments showed degradation on unseen days that increased with temporal distance. Attention visualizations revealed recurring temporal chunking in encoder representations and distinct use of these segments by phoneme and word decoders. These results indicate that contextual sequence-to-sequence modeling can improve the fidelity of neural-to-phoneme readout from intracortical speech signals and suggest that attention-based analyses can generate useful hypotheses about how neural speech evidence is segmented and accumulated over time.
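The gap between direct decoding (25.6% WER) and candidate generation with rescoring (19.4% WER) comes from re-ranking an n-best list rather than committing to the single best decoder hypothesis. A minimal sketch of such a rescorer, with an illustrative interpolation weight and a stand-in `lm_score` function (neither is specified in the paper):

```python
def rescore(candidates, lm_score, alpha=0.5):
    """Pick the best candidate by interpolating the decoder's
    log-probability with a language-model score.

    candidates : list of {"text": str, "dec_logp": float}
    lm_score   : callable mapping text -> log-probability
    alpha      : illustrative interpolation weight (not from the paper)
    """
    def combined(c):
        return (1 - alpha) * c["dec_logp"] + alpha * lm_score(c["text"])
    return max(candidates, key=combined)

# Toy usage: the LM prefers a fluent sentence the decoder ranked second.
toy_lm = {"i want water": -2.0, "eye want water": -9.0}.__getitem__
candidates = [
    {"text": "eye want water", "dec_logp": -3.0},
    {"text": "i want water", "dec_logp": -3.5},
]
best = rescore(candidates, toy_lm)
```

The same structure underlies most LM-rescored speech decoders: the neural decoder proposes, and an external language model arbitrates among acoustically (or here, neurally) plausible alternatives.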