Rethinking Multimodal Fusion for Time Series: Auxiliary Modalities Need Constrained Fusion

arXiv cs.AI / 3/25/2026


Key Points

  • The paper finds that adding auxiliary modalities like text or vision to time series forecasting often yields limited or inconsistent improvements, and in many cases naive fusion (e.g., addition/concatenation) can underperform unimodal time-series models.
  • The authors attribute this to uncontrolled integration of auxiliary information that may be irrelevant to the time-series dynamics, which hurts generalization across datasets and architectures.
  • They evaluate multiple constrained fusion strategies that regulate cross-modal integration and show these methods consistently outperform naive fusion approaches.
  • The proposed Controlled Fusion Adapter (CFA) is a plug-in technique that adds controlled cross-modal interactions using low-rank adapters to filter irrelevant textual signals before fusing them into temporal representations, without changing the time-series backbone.
  • Extensive evaluation (over 20K experiments across datasets and TS/text model variants) supports the effectiveness of constrained fusion methods, and the authors release code publicly.
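The key points above describe CFA only at a high level. Purely as an illustration of the constrained-fusion idea (not the paper's exact design), a low-rank adapter that filters and gates textual features before adding them to the time-series representation might look like this sketch; all names, shapes, and the sigmoid gate here are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_adapter(x, A, B):
    # Low-rank projection x @ A @ B; the bottleneck rank is A.shape[1].
    return x @ A @ B

def constrained_fusion(ts_repr, text_repr, A, B, gate_w):
    """Sketch of constrained fusion: project text features through a
    low-rank adapter, gate them per time step, then add the result to
    the time-series representation (backbone left untouched)."""
    adapted = low_rank_adapter(text_repr, A, B)          # (T, d)
    gate = 1.0 / (1.0 + np.exp(-(adapted @ gate_w)))     # sigmoid gate, (T,)
    return ts_repr + gate[:, None] * adapted             # residual fusion

# Hypothetical dimensions: T time steps, feature dim d, adapter rank r.
T, d, r = 8, 16, 4
ts_repr = rng.normal(size=(T, d))
text_repr = rng.normal(size=(T, d))
A = rng.normal(size=(d, r)) * 0.1    # down-projection
B = rng.normal(size=(r, d)) * 0.1    # up-projection
gate_w = rng.normal(size=d)

fused = constrained_fusion(ts_repr, text_repr, A, B, gate_w)
```

Contrast this with naive fusion (`ts_repr + text_repr`), which injects the auxiliary signal unfiltered; the rank-`r` bottleneck and gate are what "constrain" the cross-modal contribution.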

Abstract

Recent advances in multimodal learning have motivated the integration of auxiliary modalities such as text or vision into time series (TS) forecasting. However, most existing methods provide limited gains, often improving performance only on specific datasets or relying on architecture-specific designs that limit generalization. In this paper, we show that multimodal models with naive fusion strategies (e.g., simple addition or concatenation) often underperform unimodal TS models, which we attribute to the uncontrolled integration of auxiliary modalities that may introduce irrelevant information. Motivated by this observation, we explore various constrained fusion methods designed to control such integration and find that they consistently outperform naive fusion methods. Furthermore, we propose the Controlled Fusion Adapter (CFA), a simple plug-in method that enables controlled cross-modal interactions without modifying the TS backbone, integrating only textual information aligned with TS dynamics. CFA employs low-rank adapters to filter irrelevant textual information before fusing it into temporal representations. We conduct over 20K experiments across various datasets and TS/text models, demonstrating the effectiveness of constrained fusion methods including CFA. Code is publicly available at: https://github.com/seunghan96/cfa/.