Thinking While Listening: Fast-Slow Recurrence for Long-Horizon Sequential Modeling

arXiv cs.LG / 4/3/2026


Key Points

  • The paper introduces “Fast-Slow Recurrence” for long-horizon sequential modeling: fast, self-organizing recurrent latent updates are interleaved between slower updates driven by incoming observations.
  • It extends latent recurrent modeling to sequential input streams, aiming to learn stable internal structures that evolve coherently with the data over time.
  • The proposed mechanism is designed to keep representations clustered and coherent across long horizons, rather than drifting as representations in sequential baselines tend to do.
  • Experiments reported in the abstract suggest improved out-of-distribution generalization on reinforcement learning and algorithmic tasks compared with LSTM, state space models, and Transformer variants.

Abstract

We extend recent latent recurrent modeling to sequential input streams. By interleaving fast, recurrent latent updates with self-organizational ability between slow observation updates, our method facilitates the learning of stable internal structures that evolve alongside the input. This mechanism allows the model to maintain coherent and clustered representations over long horizons, improving out-of-distribution generalization in reinforcement learning and algorithmic tasks compared to sequential baselines such as LSTM, state space models, and Transformer variants.
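
The abstract gives no pseudocode, so the sketch below is only one plausible reading of the fast-slow interleaving, written in PyTorch: a slow GRU cell folds each new observation into the latent state, then several fast latent-only steps refine that state before the next observation arrives. All names here (`FastSlowCell`, `n_fast_steps`) are illustrative assumptions, and the plain GRU fast path is a stand-in for the paper's self-organizing latent update, which the abstract does not specify.

```python
import torch
import torch.nn as nn

class FastSlowCell(nn.Module):
    """Hypothetical fast-slow recurrence; not the paper's actual architecture."""

    def __init__(self, obs_dim: int, latent_dim: int, n_fast_steps: int = 4):
        super().__init__()
        self.n_fast_steps = n_fast_steps
        # Slow path: folds each new observation into the latent state.
        self.slow = nn.GRUCell(obs_dim, latent_dim)
        # Fast path: latent-only refinement between observations (a stand-in
        # for the paper's self-organizing update).
        self.fast = nn.GRUCell(latent_dim, latent_dim)

    def forward(self, obs_seq, h):
        # obs_seq: (T, batch, obs_dim); h: (batch, latent_dim)
        outputs = []
        for obs in obs_seq:
            h = self.slow(obs, h)             # slow update at each observation
            for _ in range(self.n_fast_steps):
                h = self.fast(h, h)           # fast "thinking" steps, no new input
            outputs.append(h)
        return torch.stack(outputs), h        # (T, batch, latent_dim), final state

# Example: 100 observation steps, batch of 8, 16-dim observations.
cell = FastSlowCell(obs_dim=16, latent_dim=64)
obs_seq = torch.randn(100, 8, 16)
h0 = torch.zeros(8, 64)
outs, h_final = cell(obs_seq, h0)             # outs: (100, 8, 64)
```

One point the sketch makes concrete: because the fast steps consume no new observations, the number of latent updates per observation is a free knob, which is presumably what gives the latent structure time to settle between inputs.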