Time Blindness: Why Video-Language Models Can't See What Humans Can?

arXiv cs.CV / 4/30/2026


Key Points

  • The paper argues that current vision-language models (VLMs) rely too heavily on frame-level spatial features, making them fail when videos require understanding of temporal-only patterns.
  • It introduces SpookyBench, a benchmark where the informative content is encoded exclusively in temporal sequences of noise-like frames, simulating signals such as biological signaling and covert communication.
  • In a striking human-vs-model comparison, humans classify the temporal sequences with over 98% accuracy, while state-of-the-art VLMs score 0% accuracy.
  • The authors show that when training data has a low spatial signal-to-noise ratio (SNR), models' temporal understanding degrades faster than human perception does, and the problem persists across model scales and architectures.
  • The study concludes that addressing the limitation will likely require new architectures or training paradigms that decouple spatial dependencies from temporal processing, and it publicly releases the dataset and code.

Abstract

Recent advances in vision-language models (VLMs) have made impressive strides in understanding spatio-temporal relationships in videos. However, when spatial information is obscured, these models struggle to capture purely temporal patterns. We introduce SpookyBench, a benchmark where information is encoded solely in temporal sequences of noise-like frames, mirroring natural phenomena from biological signaling to covert communication. Interestingly, while humans can recognize shapes, text, and patterns in these sequences with over 98% accuracy, state-of-the-art VLMs achieve 0% accuracy. This performance gap highlights a critical limitation: an over-reliance on frame-level spatial features and an inability to extract meaning from temporal cues. Furthermore, when trained on datasets with low spatial signal-to-noise ratios (SNR), models' temporal understanding degrades more rapidly than human perception, especially in tasks requiring fine-grained temporal reasoning. Overcoming this limitation will require novel architectures or training paradigms that decouple spatial dependencies from temporal processing. Our systematic analysis shows that this issue persists across model scales and architectures. We release SpookyBench to catalyze research in temporal pattern recognition and bridge the gap between human and machine video understanding. The dataset and code have been made available on our project website: https://timeblindness.github.io/.
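To make the "information encoded solely in temporal sequences of noise-like frames" idea concrete, here is a minimal sketch of one way such a stimulus could be constructed. This is a hypothetical illustration, not the paper's actual generation procedure: pixels belonging to a hidden shape invert their value on every frame, while background pixels are independent noise, so each individual frame looks like pure noise and the shape exists only in the temporal flicker pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 32, 32, 32  # frame height, width, number of frames

# Hypothetical hidden-shape mask (stand-in for the shapes/text in the paper).
mask = np.zeros((H, W), dtype=bool)
mask[12:20, 8:24] = True

# Background pixels: independent binary noise in every frame.
video = rng.integers(0, 2, size=(T, H, W), dtype=np.uint8)

# Signal pixels: each starts at a random value and inverts on every frame.
# Spatially, any single frame is indistinguishable from noise.
init = rng.integers(0, 2, size=(H, W), dtype=np.uint8)
parity = np.arange(T, dtype=np.uint8) % 2
video[:, mask] = init[mask][None, :] ^ parity[:, None]

# A purely temporal decoder: signal pixels flip at every frame transition
# (flip rate 1.0), while background pixels flip only about half the time.
flip_rate = (video[1:] ^ video[:-1]).mean(axis=0)
recovered = flip_rate == 1.0
```

A frame-level model that pools spatial features per frame sees only noise here, which matches the intuition behind the reported 0% VLM accuracy; the shape is recoverable only by comparing frames over time, as `flip_rate` does.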
