VTBench: A Multimodal Framework for Time-Series Classification with Chart-Based Representations

arXiv cs.CV / 5/1/2026


Key Points

  • The paper introduces VTBench, a systematic and extensible framework that re-examines time-series classification (TSC) by combining raw sequences with chart-based visualizations via multimodal fusion.
  • Unlike common texture-based image encodings such as Gramian Angular Fields and Recurrence Plots, VTBench focuses on lightweight, human-interpretable charts (line, area, bar, and scatter) to support more intuitive representations.
  • The framework uses a modular design enabling multiple fusion strategies, including fusing a single chart with numerical inputs, fusing multiple chart types, or performing full multimodal fusion with raw time-series data.
  • Experiments on 31 UCR datasets show that chart-only models can be competitive on certain tasks (especially smaller datasets), and that using multiple chart types can improve accuracy by capturing complementary visual cues.
  • The authors distill practical guidelines: multimodal fusion helps when visual features add non-redundant information but can hurt performance when the visual features are redundant, so chart types and fusion strategies must be chosen carefully.
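
The chart-plus-sequence fusion idea above can be sketched in a few lines. The snippet below is a minimal illustration, not VTBench's actual pipeline: it rasterizes a series into a tiny binary line-chart image and concatenates the flattened pixels with the raw values (a simple late-fusion feature vector). The function names `raster_line_chart` and `fuse_features`, and the grid size, are hypothetical choices for this sketch.

```python
# Illustrative sketch only -- not the VTBench implementation.
def raster_line_chart(series, height=16, width=32):
    """Render a 1-D series as a binary line-chart raster (list of rows)."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0  # avoid division by zero for flat series
    grid = [[0] * width for _ in range(height)]
    n = len(series)
    for x in range(width):
        # Nearest-sample lookup along the time axis.
        value = series[min(n - 1, x * n // width)]
        y = int((value - lo) / span * (height - 1))
        grid[height - 1 - y][x] = 1  # row 0 is the top of the chart
    return grid

def fuse_features(series, height=16, width=32):
    """Concatenate flattened chart pixels with the raw sequence (late fusion)."""
    grid = raster_line_chart(series, height, width)
    pixels = [p for row in grid for p in row]
    return pixels + list(series)

feat = fuse_features([0.0, 1.0, 0.5, 0.25], height=4, width=8)
```

In VTBench itself the visual and numerical branches would be encoded by learned networks before fusion; this sketch only shows why the two views carry complementary information, since the raster preserves shape while the raw values preserve exact magnitudes.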

Abstract

Time-series classification (TSC) has advanced significantly with deep learning, yet most models rely solely on raw numerical inputs, overlooking alternative representations. While texture-based encodings such as Gramian Angular Fields (GAF) and Recurrence Plots (RP) convert time series into 2D images, they often require heavy preprocessing and yield less intuitive representations. In contrast, chart-based visualizations offer more interpretable alternatives and show promise in specific domains; however, their effectiveness remains underexplored, with limited systematic evaluation across chart types, visual encoding choices, and datasets. In this work, we introduce VTBench, a systematic and extensible framework that re-examines TSC through multimodal fusion of raw sequences and chart-based visualizations. VTBench generates lightweight, human-interpretable plots -- line, area, bar, and scatter -- providing complementary views of the same signal. We develop a modular architecture supporting multiple fusion strategies, including single-chart visual-numerical fusion, multi-chart visual fusion, and full multimodal fusion with raw inputs. Through experiments across 31 UCR datasets, we show that: (1) chart-only models are competitive in selected settings, particularly on smaller datasets; (2) combining multiple chart types can improve accuracy by capturing complementary visual cues; and (3) multimodal models improve or maintain performance when visual features provide non-redundant information, but may degrade accuracy when they introduce redundancy. We further distill practical guidelines for selecting chart types, fusion strategies, and configurations. VTBench establishes a unified foundation for interpretable and effective multimodal time-series classification.