Learning to Query History: Nonstationary Classification via Learned Retrieval
arXiv cs.LG / 4/9/2026
Key Points
- The paper proposes reframing nonstationary classification as time-series prediction by conditioning decisions on a sequence of historical labeled examples rather than only the current input.
- It introduces a discrete retrieval module, trained end-to-end, that selects relevant historical instances via input-dependent queries, enabling scalable retrieval from long histories.
- The retrieval mechanism is optimized jointly with the classifier using a score-based gradient estimator, avoiding the need to load all history into GPU memory during training and deployment.
- Experiments on synthetic benchmarks and the Amazon Reviews 23 electronics category demonstrate improved robustness to distribution shifts versus standard classifiers.
- The authors report that VRAM usage scales predictably with the length of the retrieved history sequence, supporting practical deployment with large stored corpora.
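The paper itself does not publish reference code in this summary, but the mechanism the key points describe — an input-dependent query distribution over stored history, with the retrieval step trained jointly via a score-based (REINFORCE-style) gradient estimator — can be sketched in a minimal toy form. Everything below (the memory size, the linear query projection `W`, the softmax scoring, the loss) is an illustrative assumption, not the authors' architecture; the point is only to show why a score-function estimator lets training touch one sampled history item at a time instead of the whole corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a memory of H stored historical examples,
# each a d-dimensional vector (stand-ins for labeled instances).
H, d = 50, 8
memory = rng.normal(size=(H, d))
W = rng.normal(size=(d, d)) * 0.1  # assumed query-projection parameters


def retrieval_probs(x):
    """Input-dependent retrieval distribution: softmax over memory scores."""
    scores = memory @ (W @ x)
    scores -= scores.max()           # numerical stability
    p = np.exp(scores)
    return p / p.sum()


def score_function_grad(x, loss_fn, n_samples=256):
    """REINFORCE-style estimate of d E_i~p(.|x)[loss(m_i)] / dW.

    Each sample draws ONE index i ~ p(i|x) and weights
    grad_W log p(i|x) by (loss - baseline). Only the sampled item is
    ever read, so the full history never needs to sit in accelerator
    memory during the gradient computation.
    """
    p = retrieval_probs(x)
    # Baseline: mean loss over an independent batch of samples,
    # used purely for variance reduction.
    baseline = np.mean(
        [loss_fn(memory[rng.choice(H, p=p)]) for _ in range(n_samples)]
    )
    grad = np.zeros_like(W)
    for _ in range(n_samples):
        i = rng.choice(H, p=p)
        loss = loss_fn(memory[i])
        # For scores s_j = m_j^T W x:  d log p_i / dW
        #   = (m_i - sum_j p_j m_j) x^T
        glogp = np.outer(memory[i] - p @ memory, x)
        grad += (loss - baseline) * glogp
    return grad / n_samples, baseline


# Toy usage: encourage retrieval of items close to an arbitrary target.
x = rng.normal(size=d)
target = memory[3]
loss_fn = lambda m: float(np.sum((m - target) ** 2))
g, b = score_function_grad(x, loss_fn)
```

The memory-scaling claim in the last key point follows from the same structure: the estimator's cost per step depends on the number of sampled (retrieved) items, not on `H`, so the stored corpus can grow without the per-step footprint growing with it.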