
STEP: Scientific Time-Series Encoder Pretraining via Cross-Domain Distillation

arXiv cs.LG · March 20, 2026


Key Points

  • STEP builds a unified encoder for scientific time series via cross-domain distillation from multiple foundation models pretrained on related time-series domains.
  • It introduces adaptive patching to handle extreme-length sequences and a statistics compensation scheme to accommodate diverse numerical scales (see the sketch after this list).
  • Cross-domain distillation integrates the complementary knowledge of these foundation models into a single, transferable encoder.
  • Experiments across seven scientific time series tasks show that STEP is effective both as a model architecture and as a pretraining paradigm for scientific signals.
  • The work highlights how knowledge from domains like audio, general time series, and brain signals can complement each other for scientific signal representation learning.
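
The bullets above name two input-handling mechanisms. As a rough illustration of how they might fit together, here is a minimal PyTorch sketch: `AdaptivePatcher` caps the token count of arbitrarily long signals by growing the patch length with the input, and `StatsCompensation` normalizes each series while re-injecting the removed mean and standard deviation as an extra token (a RevIN-style idea). All class names, shapes, and the patch-resampling trick are assumptions for illustration; the paper's actual formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptivePatcher(nn.Module):
    """Hypothetical adaptive patching: grow the patch length with the
    input length so any signal yields at most `max_patches` tokens,
    then resample each patch to a fixed width before embedding."""

    def __init__(self, d_model: int = 256, max_patches: int = 512, patch_dim: int = 64):
        super().__init__()
        self.max_patches = max_patches
        self.patch_dim = patch_dim
        self.embed = nn.Linear(patch_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length) univariate scientific signal
        B, L = x.shape
        patch_len = max(1, -(-L // self.max_patches))   # ceil(L / max_patches)
        x = F.pad(x, (0, (-L) % patch_len))             # right-pad to a multiple
        patches = x.view(B, -1, patch_len)              # (B, n_patches, patch_len)
        # Resample every patch to a fixed width so one Linear fits all lengths.
        patches = F.interpolate(patches, size=self.patch_dim,
                                mode="linear", align_corners=False)
        return self.embed(patches)                      # (B, n_patches, d_model)

class StatsCompensation(nn.Module):
    """Hypothetical statistics compensation: normalize each series to zero
    mean / unit variance, and re-inject the removed (mean, std) as one
    extra token so scale information survives normalization."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.stat_embed = nn.Linear(2, d_model)

    def forward(self, x: torch.Tensor):
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True).clamp_min(1e-6)
        x_norm = (x - mu) / sigma                        # scale-free signal
        stats = torch.cat([mu, sigma], dim=-1)           # (B, 2)
        stat_token = self.stat_embed(stats).unsqueeze(1) # (B, 1, d_model)
        return x_norm, stat_token

# Usage on a long, large-scale signal: 100k samples become ~512 tokens.
x = torch.randn(8, 100_000) * 3e4 + 1e5
x_norm, stat_token = StatsCompensation()(x)
tokens = torch.cat([stat_token, AdaptivePatcher()(x_norm)], dim=1)
print(tokens.shape)  # (8, 1 + n_patches, 256)
```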

Abstract

Scientific time series are central to scientific AI but are typically sparse, highly heterogeneous, and limited in scale, making unified representation learning particularly challenging. Meanwhile, foundation models pretrained on relevant time series domains such as audio, general time series, and brain signals contain rich knowledge, but their applicability to scientific signals remains underexplored. In this paper, we investigate the transferability and complementarity of foundation models from relevant time series domains, and study how to effectively leverage them to build a unified encoder for scientific time series. We first systematically evaluate relevant foundation models, showing the effectiveness of knowledge transfer to scientific tasks and their complementary strengths. Based on this observation, we propose STEP, a Scientific Time Series Encoder Pretraining framework via cross-domain distillation. STEP introduces adaptive patching to handle extreme-length sequences and a statistics compensation scheme to accommodate diverse numerical scales. It further leverages cross-domain distillation to integrate knowledge from multiple foundation models into a unified encoder. By combining complementary representations across different domains, STEP learns general-purpose and transferable features tailored for scientific signals. Experiments on seven scientific time series tasks demonstrate that STEP provides both an effective structure and an effective pretraining paradigm, taking a STEP toward scientific time series representation learning.
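
The abstract's core mechanism, cross-domain distillation, can be pictured as a single student encoder trained to match several frozen domain teachers at once. The sketch below is a hedged illustration of that idea: per-teacher linear heads project the shared student embedding into each teacher's representation space, and the loss averages one cosine-alignment term per teacher. The class, the head design, and the loss are assumptions for illustration, not STEP's published objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossDomainDistiller(nn.Module):
    """Sketch of cross-domain distillation: one student encoder aligned
    with several frozen domain teachers (e.g. audio, general time-series,
    and brain-signal foundation models) via per-teacher projection heads."""

    def __init__(self, student: nn.Module, d_student: int, teacher_dims: dict):
        super().__init__()
        self.student = student
        self.heads = nn.ModuleDict(
            {name: nn.Linear(d_student, dim) for name, dim in teacher_dims.items()}
        )

    def forward(self, x: torch.Tensor, teacher_embeds: dict) -> torch.Tensor:
        z = self.student(x)                    # (B, d_student) pooled student embedding
        loss = x.new_zeros(())
        for name, target in teacher_embeds.items():
            pred = self.heads[name](z)         # project into this teacher's space
            # Alignment term: 1 - cosine similarity with the frozen teacher output.
            loss = loss + (1.0 - F.cosine_similarity(pred, target.detach(), dim=-1)).mean()
        return loss / len(teacher_embeds)

# Usage with a toy student and random stand-ins for frozen teacher outputs.
student = nn.Sequential(nn.Linear(128, 256), nn.GELU(), nn.Linear(256, 256))
distiller = CrossDomainDistiller(student, d_student=256,
                                 teacher_dims={"audio": 768, "ts": 512, "eeg": 384})
x = torch.randn(8, 128)
targets = {"audio": torch.randn(8, 768), "ts": torch.randn(8, 512),
           "eeg": torch.randn(8, 384)}
loss = distiller(x, targets)
loss.backward()
```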