StateX: Enhancing RNN Recall via Post-training State Expansion

arXiv cs.CL · April 27, 2026

Key Points

  • The paper introduces StateX, a post-training framework designed to improve RNNs’ ability to recall information from long contexts by expanding their recurrent state size.
  • It targets a key limitation of recurrent models: because all contextual information is compressed into a fixed-size state, accurate long-range recall is difficult (illustrated in the sketch after this list).
  • StateX applies post-training architectural modifications to two RNN families, linear attention and state-space models, scaling up the state size with no or only a negligible increase in model parameters.
  • Experiments on RNNs with up to about 1.3B parameters show improved recall and in-context learning performance without high post-training costs or degradation of other capabilities.
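
To make the bottleneck concrete, here is a minimal, self-contained sketch (not the paper's implementation) of an unnormalized linear-attention recurrent update. The entire context is folded into a single state matrix S of fixed shape (d_k, d_v), so recall capacity is tied to the state size; all dimensions below are illustrative.

```python
import numpy as np

def linear_attention_step(S, k, v, q):
    """One recurrent step of (unnormalized) linear attention.

    All context seen so far is compressed into the fixed-size
    state S of shape (d_k, d_v); recall capacity scales with d_k.
    """
    S = S + np.outer(k, v)  # write: rank-1 update with the new key/value pair
    y = q @ S               # read: query the accumulated state
    return S, y

d_k, d_v, T = 64, 64, 128   # illustrative sizes; state expansion grows d_k
S = np.zeros((d_k, d_v))
for _ in range(T):
    k, v, q = (np.random.randn(d) for d in (d_k, d_v, d_k))
    S, y = linear_attention_step(S, k, v, q)
# S never grows with the context length T -- this fixed-size state is
# the recall bottleneck that post-training state expansion targets.
```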

Abstract

Recurrent neural networks (RNNs), such as linear attention and state-space models, have gained popularity due to their constant per-token complexity when processing long contexts. However, these recurrent models struggle with tasks that require accurate recall of contextual information, because the entire context is compressed into a fixed-size recurrent state. Previous studies have shown that recall ability is positively correlated with recurrent state size, yet directly training RNNs with large recurrent states incurs high training costs. In this paper, we introduce StateX, a post-training framework that efficiently expands the states of pre-trained RNNs. For two popular classes of RNNs, linear attention and state-space models, StateX provides post-training architectural modifications that scale up the state size with no or only a negligible increase in model parameters. Experiments on models with up to 1.3B parameters demonstrate that StateX efficiently enhances the recall and in-context learning performance of RNNs without incurring high post-training costs or compromising other capabilities.
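
The abstract does not spell out the modifications themselves, but the "no or negligible increase in model parameters" constraint suggests widening the state by reusing existing projections through fixed, non-learned transforms rather than adding new weight matrices. The sketch below is one hypothetical way to do this for linear attention; W_k, R, and expanded_key are illustrative names, not from the paper. Keys are widened from d_k to 2*d_k with a frozen random mixing matrix, doubling the recurrent state to (2*d_k, d_v) at zero parameter cost, after which post-training would adapt the model to exploit the larger state.

```python
import numpy as np

rng = np.random.default_rng(0)
d_k, d_v = 64, 64
# Stand-in for a pretrained key projection (hypothetical).
W_k = rng.standard_normal((d_k, d_k)) / np.sqrt(d_k)

# Fixed, non-learned mixing matrix: sampled once and frozen, so
# expanding the state adds zero trainable parameters.
R = rng.standard_normal((d_k, d_k)) / np.sqrt(d_k)

def expanded_key(x):
    """Map input x to a 2*d_k key, doubling the recurrent state to
    (2*d_k, d_v); post-training then adapts the model to use it."""
    k = x @ W_k
    return np.concatenate([k, k @ R])  # [original features | fixed remix]

x = rng.standard_normal(d_k)
print(expanded_key(x).shape)  # (128,) -> the state becomes (128, d_v)
```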