StateLinFormer: Stateful Training Enhancing Long-term Memory in Navigation
arXiv cs.LG / 2026/3/26
Key Points
- The paper introduces StateLinFormer, a stateful linear-attention navigation model designed to retain long-term memory across training segments rather than resetting at batch boundaries.
- The authors propose a training method that preserves recurrent memory states across segments, which better approximates learning on infinitely long sequences and supports long-horizon memory retention (see the sketch after this list).
- Experiments in MAZE and ProcTHOR show that StateLinFormer significantly outperforms both its stateless linear-attention variant and fixed-context-window Transformer baselines.
- The results indicate that as interaction length grows, stateful training improves context-dependent adaptation, implying stronger in-context learning (ICL)-like capabilities for navigation.
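
Below is a minimal sketch of what stateful linear-attention training could look like, assuming a recurrent formulation of linear attention whose key-value memory is carried across training segments instead of being reset at batch boundaries. The class and function names (`StatefulLinearAttention`, `train_on_trajectory`) are illustrative only, not the authors' implementation, and normalization terms are omitted for brevity.

```python
# Hypothetical sketch of stateful linear-attention training (PyTorch).
# Not the paper's actual code; names and loss are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StatefulLinearAttention(nn.Module):
    """Linear attention in recurrent form; its KV memory can persist across segments."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor, state: torch.Tensor | None = None):
        # x: (batch, seq, dim); state: (batch, dim, dim) KV memory or None
        q = F.elu(self.q(x)) + 1  # positive feature map
        k = F.elu(self.k(x)) + 1
        v = self.v(x)
        b, t, d = x.shape
        if state is None:
            state = torch.zeros(b, d, d, device=x.device, dtype=x.dtype)
        outputs = []
        for i in range(t):
            # Recurrent update: accumulate outer products of keys and values.
            state = state + k[:, i].unsqueeze(-1) @ v[:, i].unsqueeze(1)
            outputs.append(q[:, i].unsqueeze(1) @ state)
        return torch.cat(outputs, dim=1), state


def train_on_trajectory(model, segments, optimizer):
    """Stateful training: memory persists across segments of one long trajectory."""
    state = None
    for segment in segments:  # segments: list of (batch, seq, dim) chunks
        out, state = model(segment, state)
        loss = out.pow(2).mean()  # placeholder loss; a real task loss goes here
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        # Carry the memory forward to the next segment, but stop gradients
        # so backprop stays within the current segment.
        state = state.detach()
```

The key design choice is the final `state.detach()`: gradients are truncated at segment boundaries, but the memory itself flows through the whole trajectory, approximating training on one unbroken long sequence.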



