Do VLMs Truly "Read" Candlesticks? A Multi-Scale Benchmark for Visual Stock Price Forecasting

arXiv cs.LG / 4/15/2026


Key Points

  • The paper argues that prior benchmarks for vision-language models (VLMs) in visual stock price forecasting do not adequately test whether models truly understand candlestick chart patterns from visual inputs.
  • It introduces a new multi-scale candlestick dataset and standardized evaluation framework designed to reflect how human analysts integrate long-term trends with short-term inflection cues.
  • The evaluation uses confusion-matrix diagnostics and information coefficient (IC) time-series metrics, with XGBoost included as a feature-based temporal baseline.
  • Benchmarking representative VLMs shows they perform well mainly in persistent uptrend or downtrend regimes, but weakly in more typical market conditions.
  • The study finds meaningful prediction biases and limited sensitivity to user-specified forecast horizons, suggesting constraints in VLMs’ precise temporal reasoning.
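The information coefficient mentioned above is commonly computed as the cross-sectional Spearman rank correlation between predicted and realized returns on each date, then tracked as a time series. The paper does not publish its exact implementation, so the sketch below is an illustrative version of that standard definition; the function name and the toy numbers are invented for the example.

```python
import numpy as np
from scipy.stats import spearmanr

def daily_ic(predictions, realized_returns):
    """One day's information coefficient: Spearman rank correlation
    between predicted and realized returns across assets."""
    ic, _ = spearmanr(predictions, realized_returns)
    return ic

# Hypothetical one-day cross-section of 5 stocks (illustrative values).
preds = np.array([0.02, -0.01, 0.03, 0.00, -0.02])
actual = np.array([0.015, -0.02, 0.025, 0.005, -0.01])
print(round(daily_ic(preds, actual), 3))  # → 0.9
```

Averaging `daily_ic` over the evaluation window (and dividing by its standard deviation to get an IC information ratio) is the usual way such a metric is summarized.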

Abstract

Vision-language models (VLMs) are increasingly applied to visual stock price forecasting, yet existing benchmarks inadequately evaluate their understanding of stock prices in candlestick charts. First, prior studies fail to isolate whether VLMs' comprehension of visual inputs genuinely improves predictive performance and whether VLMs truly comprehend candlestick patterns. Further, most existing datasets and evaluation setups are designed around single-period or tabular inputs. However, human analysts rely heavily on multi-scale candlestick charts, where longer-term horizons capture trend direction and shorter-term horizons provide cues for inflection points, making it difficult to systematically assess VLMs' ability to integrate short-term and long-term visual market dynamics. To bridge this gap, we construct a multi-scale candlestick chart dataset and a standardized evaluation framework to assess VLMs' ability to utilize multi-scale visual market signals. Evaluation combines confusion-matrix-based diagnostics with information coefficient (IC) time-series metrics and includes XGBoost as a feature-based temporal baseline. Using this dataset, we benchmark representative VLMs and analyze their ability to leverage multi-scale stock price data. Experimental results show that most VLMs perform well only under persistent uptrend or downtrend conditions, while exhibiting weak predictive capability in more common market scenarios. We also identify significant prediction biases and limited sensitivity to explicitly specified forecast horizons in prompts, indicating inherent limitations in precise temporal reasoning.
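Multi-scale candlestick views of the kind the abstract describes are typically produced by aggregating one price series at several timeframes, where each coarser candle takes the first open, highest high, lowest low, and last close of its window. The paper's exact pipeline is not given, so this is a minimal sketch of that standard OHLC resampling using pandas; the synthetic random-walk data is purely illustrative.

```python
import numpy as np
import pandas as pd

# Synthetic daily close prices on business days (illustrative only).
dates = pd.date_range("2024-01-01", periods=120, freq="B")
close = pd.Series(
    100 + np.cumsum(np.random.default_rng(0).normal(0, 1, 120)), index=dates
)
daily = pd.DataFrame({
    "open": close.shift(1).fillna(close.iloc[0]),
    "high": close + 0.5,   # toy intraday range
    "low": close - 0.5,
    "close": close,
})

# Aggregate the same series into coarser candles: long horizons show
# trend direction, short horizons retain inflection-point detail.
ohlc_agg = {"open": "first", "high": "max", "low": "min", "close": "last"}
weekly = daily.resample("W").agg(ohlc_agg)
monthly = daily.resample("MS").agg(ohlc_agg)
print(len(daily), len(weekly), len(monthly))
```

Rendering each of these frames as a chart image (e.g. with a plotting library such as mplfinance) would yield the multi-scale visual inputs that a VLM benchmark of this kind consumes.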