JointFM-0.1: A Foundation Model for Multi-Target Joint Distributional Prediction

arXiv cs.LG / March 24, 2026


Key Points

  • The report proposes JointFM-0.1, a foundation model aimed at distributional forecasting for coupled (multi-target) time series under uncertainty.
  • Instead of fitting stochastic differential equations (SDEs) to data, JointFM trains by generating an infinite stream of synthetic SDEs and learning to predict future joint probability distributions directly.
  • The approach is presented as zero-shot, avoiding task-specific calibration or fine-tuning typically required for SDE-based modeling pipelines.
  • In experiments on unseen synthetic SDEs, JointFM is reported to reduce energy loss by 14.2% versus the strongest baseline while recovering “oracle” joint distributions.
  • The work reframes the role of SDEs from an explicit modeling target to a synthetic data generator for learning a general distribution predictor.
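The training recipe described above, sampling an endless stream of synthetic SDEs rather than fitting one to data, can be sketched as follows. This is a minimal illustration, not the paper's actual generator: the coupled Ornstein-Uhlenbeck parameterization, the function name, and all constants are assumptions chosen to show the idea of simulating randomly parameterized coupled SDEs via Euler-Maruyama to produce multi-target training trajectories.

```python
import numpy as np

def sample_synthetic_sde(n_targets=2, n_steps=100, dt=0.01, rng=None):
    """Simulate one randomly parameterized coupled Ornstein-Uhlenbeck SDE
    with Euler-Maruyama; returns a (n_steps + 1, n_targets) trajectory.
    (Hypothetical generator; JointFM's actual SDE family is not public.)"""
    rng = rng or np.random.default_rng()
    # Random mean-reverting drift matrix; off-diagonal terms couple targets.
    A = -np.eye(n_targets) + 0.3 * rng.standard_normal((n_targets, n_targets))
    sigma = rng.uniform(0.1, 1.0, size=n_targets)  # per-target noise scale
    x = np.zeros((n_steps + 1, n_targets))
    x[0] = rng.standard_normal(n_targets)
    for t in range(n_steps):
        drift = A @ x[t]
        noise = sigma * rng.standard_normal(n_targets)
        x[t + 1] = x[t] + drift * dt + noise * np.sqrt(dt)
    return x

# One training batch: every trajectory comes from a freshly sampled SDE,
# so the model never sees the same dynamics twice.
batch = np.stack([sample_synthetic_sde(rng=np.random.default_rng(i))
                  for i in range(8)])
```

A model trained on such a stream learns to map an observed prefix of each trajectory to the joint distribution over its future values, rather than to the parameters of any single SDE.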

Abstract

Despite the rapid advancements in Artificial Intelligence (AI), Stochastic Differential Equations (SDEs) remain the gold-standard formalism for modeling systems under uncertainty. However, applying SDEs in practice is fraught with challenges: modeling risk is high, calibration is often brittle, and high-fidelity simulations are computationally expensive. This technical report introduces JointFM, a foundation model that inverts this paradigm. Instead of fitting SDEs to data, we sample an infinite stream of synthetic SDEs to train a generic model to predict future joint probability distributions directly. This approach establishes JointFM as the first foundation model for distributional predictions of coupled time series, requiring no task-specific calibration or fine-tuning. Despite operating in a purely zero-shot setting, JointFM reduces the energy loss by 14.2% relative to the strongest baseline when recovering oracle joint distributions generated by unseen synthetic SDEs.
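The abstract's headline number is a reduction in energy loss. The energy score is a standard proper scoring rule for multivariate distributional forecasts, and a Monte Carlo estimate of it can be computed directly from forecast samples. The sketch below shows that standard formula; whether JointFM uses exactly this estimator or a variant is not stated in the summary, so treat the function as illustrative.

```python
import numpy as np

def energy_score(forecast_samples, observation):
    """Monte Carlo energy score: E||X - y|| - 0.5 * E||X - X'||,
    where X, X' are i.i.d. forecast samples and y is the realized outcome.
    Lower is better; it rewards forecasts that capture the joint
    distribution across targets, not just per-target marginals."""
    X = np.asarray(forecast_samples, dtype=float)  # shape (m, d)
    y = np.asarray(observation, dtype=float)       # shape (d,)
    # Mean distance from each forecast sample to the observation.
    term1 = np.mean(np.linalg.norm(X - y, axis=1))
    # Mean pairwise distance among forecast samples (spread penalty offset).
    diffs = X[:, None, :] - X[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=2))
    return term1 - term2
```

For example, a forecast whose samples sit on the realized outcome scores 0, while one concentrated far from it scores strictly higher, which is the sense in which a 14.2% reduction versus the strongest baseline indicates closer recovery of the oracle joint distribution.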