A Foundation Model for Instruction-Conditioned In-Context Time Series Tasks
arXiv cs.LG / 3/25/2026
Key Points
- The paper proposes a time-series foundation model that supports instruction-conditioned in-context learning, using demonstrations rather than task-specific fine-tuning.
- It builds on an encoder-decoder backbone (a quantile-regression T5) with structured tokenization that explicitly marks the target series, covariates, context, and task-specific future information (see the tokenization sketch after this list).
- A hierarchical Transformer architecture performs per-example encoding and cross-example attention during decoding to condition forecasts on demonstration pairs (a minimal sketch also follows the list).
- The model is trained on large-scale real and synthetic data with supervised forecasting plus multiple self-supervised tasks (imputation, reconstruction, classification, anomaly detection, and source demixing) to learn mappings across tasks.
- Experiments across datasets, frequencies, and horizons show improved performance over strong time-series foundation baselines on point and probabilistic forecasting benchmarks, while staying competitive for classification and anomaly detection.
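To make the structured-tokenization idea concrete, here is a minimal sketch of how one example might be flattened into a marked token sequence. The marker names (`TARGET`, `COVARIATE`, `FUTURE_INFO`, `SEP`) and the `Token` layout are illustrative assumptions, not the paper's actual vocabulary or encoding scheme.

```python
# Hypothetical sketch of structured tokenization for one example.
# Marker names and the Token layout are assumptions for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Token:
    kind: str      # e.g. "TARGET", "COVARIATE", "FUTURE_INFO", "VALUE", "SEP"
    value: float   # scaled series value (0.0 for pure marker tokens)

def tokenize_example(target: List[float],
                     covariates: List[List[float]],
                     future_info: List[float]) -> List[Token]:
    """Flatten one example into a marked sequence: the target history,
    each covariate series, and known future information, separated by
    explicit type markers so the model can tell the roles apart."""
    tokens: List[Token] = [Token("TARGET", 0.0)]
    tokens += [Token("VALUE", v) for v in target]
    for cov in covariates:
        tokens.append(Token("COVARIATE", 0.0))
        tokens += [Token("VALUE", v) for v in cov]
    tokens.append(Token("FUTURE_INFO", 0.0))
    tokens += [Token("VALUE", v) for v in future_info]
    tokens.append(Token("SEP", 0.0))
    return tokens

# Example: 4-step target history, one covariate, 2 known future values.
seq = tokenize_example([1.0, 1.2, 0.9, 1.1], [[0.0, 0.1, 0.0, 0.2]], [0.1, 0.3])
print([t.kind for t in seq])
```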
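The hierarchical conditioning pattern can likewise be sketched in a few lines: each demonstration example is encoded independently, and the decoder then attends across all example encodings when producing the forecast. The dimensions, layer counts, quantile head, and module names below are assumptions for illustration, not the paper's architecture.

```python
# Minimal PyTorch sketch of per-example encoding followed by
# cross-example attention, under assumed dimensions and layer counts.
import torch
import torch.nn as nn

class HierarchicalForecaster(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.example_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 3)  # e.g. 3 forecast quantiles per step

    def forward(self, examples: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # examples: (n_examples, seq_len, d_model) token embeddings of demonstrations
        # query:    (horizon, d_model) decoder states for the target forecast
        per_example = self.example_encoder(examples)                  # encode each example alone
        memory = per_example.reshape(1, -1, per_example.size(-1))     # concatenate example encodings
        out, _ = self.cross_attn(query.unsqueeze(0), memory, memory)  # attend across examples
        return self.head(out.squeeze(0))                              # (horizon, n_quantiles)

model = HierarchicalForecaster()
demos = torch.randn(3, 16, 64)    # 3 demonstration examples, 16 tokens each
query = torch.randn(8, 64)        # 8-step forecast horizon
print(model(demos, query).shape)  # torch.Size([8, 3])
```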