Baguan-TS: A Sequence-Native In-Context Learning Model for Time Series Forecasting with Covariates

arXiv cs.LG / 3/19/2026

Key Points

  • Baguan-TS introduces a sequence-native in-context learning framework for time series forecasting, built on a 3D Transformer that jointly attends over the temporal, variable, and context axes (a minimal sketch of this axial attention follows the list).
  • It tackles calibration and training stability with a retrieval-based local calibration strategy and mitigates output oversmoothing via a context-overfitting approach.
  • On public benchmarks with covariates, Baguan-TS consistently outperforms baselines, achieving higher win rates and significant improvements in both point and probabilistic forecasting.
  • Real-world energy datasets demonstrate robustness and substantial forecasting gains, indicating practical potential in energy forecasting and other covariate-rich domains.
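
The paper's summary names a 3D Transformer over temporal, variable, and context axes but does not show code. Below is a minimal sketch of one plausible realization: factorized (axis-by-axis) attention over a `(batch, context, variable, time, d_model)` tensor. The tensor layout, the factorized design, and all dimensions are assumptions for illustration, not the authors' actual architecture (which may attend over the three axes jointly rather than sequentially).

```python
# Hedged sketch of attention over three axes (time, variable, context).
# NOT Baguan-TS's code: layout, factorization, and sizes are assumptions.
import torch
import torch.nn as nn


class AxialAttention3D(nn.Module):
    """Self-attention along one axis of a (batch, context, variable, time, d) tensor."""

    def __init__(self, d_model: int, n_heads: int, axis: int):
        super().__init__()
        self.axis = axis  # 1 = context, 2 = variable, 3 = time
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x.shape[-1]
        # Move the attended axis next to d_model, flatten the rest into batch.
        x_moved = x.movedim(self.axis, 3)
        lead = x_moved.shape[:3]
        seq = x_moved.reshape(-1, x_moved.shape[3], d)
        out, _ = self.attn(seq, seq, seq)
        out = out.reshape(*lead, x_moved.shape[3], d).movedim(3, self.axis)
        return self.norm(x + out)  # residual connection + layer norm


class Block3D(nn.Module):
    """One 3D block: attend over time, then variables, then in-context examples."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.time_attn = AxialAttention3D(d_model, n_heads, axis=3)
        self.var_attn = AxialAttention3D(d_model, n_heads, axis=2)
        self.ctx_attn = AxialAttention3D(d_model, n_heads, axis=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.ctx_attn(self.var_attn(self.time_attn(x)))


# Toy usage: 2 series, 4 in-context examples, 3 covariates, 24 time steps.
x = torch.randn(2, 4, 3, 24, 64)
print(Block3D()(x).shape)  # torch.Size([2, 4, 3, 24, 64])
```

Factorizing attention this way keeps cost linear in each axis length rather than in their product, which is a common trade-off for high-dimensional inputs; whether Baguan-TS makes the same trade-off is not stated here.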

Abstract

Transformers enable in-context learning (ICL) for rapid, gradient-free adaptation in time series forecasting, yet most ICL-style approaches rely on tabularized, hand-crafted features, while end-to-end sequence models lack inference-time adaptation. We bridge this gap with a unified framework, Baguan-TS, which integrates raw-sequence representation learning with ICL, instantiated by a 3D Transformer that attends jointly over temporal, variable, and context axes. To make this high-capacity model practical, we tackle two key hurdles: (i) calibration and training stability, improved with a feature-agnostic, target-space, retrieval-based local calibration; and (ii) output oversmoothing, mitigated via a context-overfitting strategy. On public benchmarks with covariates, Baguan-TS consistently outperforms established baselines, achieving the highest win rate and significant reductions in both point and probabilistic forecasting metrics. Further evaluations across diverse real-world energy datasets demonstrate its robustness, yielding substantial improvements.
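
The abstract names a feature-agnostic, target-space, retrieval-based local calibration without spelling out the mechanism. The sketch below is one hedged reading: retrieve the k historical windows closest to the recent target trajectory (distances computed purely on target values, so no hand-crafted covariate features are needed) and adjust the forecast with the residuals that followed those neighbors. The function name, the k-NN retrieval, and the conformal-style residual-quantile adjustment are all my assumptions, not Baguan-TS's actual procedure.

```python
# Hedged sketch of retrieval-based local calibration in target space.
# NOT the paper's algorithm: retrieval rule and adjustment are assumptions.
import numpy as np


def local_calibration(
    recent: np.ndarray,          # (w,) last w observed target values
    forecast: np.ndarray,        # (h,) raw point forecast for the next h steps
    hist_windows: np.ndarray,    # (n, w) historical target windows
    hist_residuals: np.ndarray,  # (n, h) model residuals that followed each window
    k: int = 20,
    alpha: float = 0.1,
) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Return (calibrated point forecast, lower band, upper band)."""

    def znorm(a: np.ndarray) -> np.ndarray:
        # Normalize each window so retrieval compares shape, not scale.
        mu = a.mean(axis=-1, keepdims=True)
        sd = a.std(axis=-1, keepdims=True) + 1e-8
        return (a - mu) / sd

    # Retrieve the k nearest neighbors purely in target space.
    dists = np.linalg.norm(znorm(hist_windows) - znorm(recent), axis=1)
    neighbors = hist_residuals[np.argsort(dists)[:k]]  # (k, h)

    # Local bias correction plus empirical residual quantiles per horizon step.
    point = forecast + neighbors.mean(axis=0)
    lo = forecast + np.quantile(neighbors, alpha / 2, axis=0)
    hi = forecast + np.quantile(neighbors, 1 - alpha / 2, axis=0)
    return point, lo, hi
```

Because the bias and the interval widths come only from residuals observed after similar target trajectories, the calibration is local to the current regime; that locality is the property the abstract appears to be after, though the exact mechanism in the paper may differ.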