
GenVideoLens: Where LVLMs Fall Short in AI-Generated Video Detection?

arXiv cs.CV / 3/20/2026


Key Points

  • GenVideoLens is a fine-grained benchmark for evaluating LVLMs on AI-generated video detection, enabling dimension-wise assessment rather than a single binary real-vs-fake classification (a minimal scoring sketch follows this list).
  • The benchmark contains 400 highly deceptive AI-generated videos and 100 real videos, annotated by experts across 15 authenticity dimensions spanning perceptual, optical, physical, and temporal cues.
  • Eleven representative LVLMs are evaluated, revealing that models perform relatively well on perceptual cues but struggle with optical consistency, physical interactions, and temporal-causal reasoning.
  • Performance varies across models, with smaller open-source models sometimes outperforming stronger proprietary models on specific cues.
  • Temporal perturbation experiments indicate that LVLMs make limited use of temporal information, offering diagnostic guidance for improving future AI-generated video detectors.
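
The article does not reproduce the paper's exact scoring protocol, but dimension-wise evaluation of this kind can be sketched as follows. This is a minimal sketch, assuming each video carries per-dimension expert labels and model predictions; the dimension names and cue-family grouping below are illustrative placeholders, not the benchmark's actual taxonomy.

```python
from collections import defaultdict

# Illustrative cue families only: GenVideoLens groups its 15 authenticity
# dimensions into perceptual, optical, physical, and temporal cues, but the
# dimension names here are placeholders, not the paper's taxonomy.
FAMILIES = {
    "perceptual": ["texture_fidelity", "color_consistency", "face_plausibility"],
    "optical":    ["lighting", "reflection", "shadow_geometry"],
    "physical":   ["object_contact", "gravity", "rigid_body_motion"],
    "temporal":   ["motion_smoothness", "event_causality", "temporal_continuity"],
}

def dimension_accuracy(records):
    """Per-dimension accuracy from records shaped like
    {"labels": {dim: bool}, "preds": {dim: bool}}."""
    correct, total = defaultdict(int), defaultdict(int)
    for rec in records:
        for dim, gold in rec["labels"].items():
            total[dim] += 1
            correct[dim] += int(rec["preds"].get(dim) == gold)
    return {dim: correct[dim] / total[dim] for dim in total}

def family_accuracy(per_dim):
    """Average per-dimension accuracy within each cue family."""
    return {
        fam: sum(per_dim[d] for d in dims if d in per_dim)
             / max(1, sum(d in per_dim for d in dims))
        for fam, dims in FAMILIES.items()
    }
```

A per-family breakdown like this is what surfaces the dimensional imbalance the paper reports: aggregate accuracy can look healthy while the optical, physical, and temporal families lag far behind the perceptual one.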

Abstract

In recent years, AI-generated videos have become increasingly realistic and sophisticated. Meanwhile, Large Vision-Language Models (LVLMs) have shown strong potential for detecting such content. However, existing evaluation protocols largely treat the task as a binary classification problem and rely on coarse-grained metrics such as overall accuracy, providing limited insight into where LVLMs succeed or fail. To address this limitation, we introduce GenVideoLens, a fine-grained benchmark that enables dimension-wise evaluation of LVLM capabilities in AI-generated video detection. The benchmark contains 400 highly deceptive AI-generated videos and 100 real videos, annotated by experts across 15 authenticity dimensions covering perceptual, optical, physical, and temporal cues. We evaluate eleven representative LVLMs on this benchmark. Our analysis reveals a pronounced dimensional imbalance. While LVLMs perform relatively well on perceptual cues, they struggle with optical consistency, physical interactions, and temporal-causal reasoning. Model performance also varies substantially across dimensions, with smaller open-source models sometimes outperforming stronger proprietary models on specific authenticity cues. Temporal perturbation experiments further show that current LVLMs make limited use of temporal information. Overall, GenVideoLens provides diagnostic insights into LVLM behavior, revealing key capability gaps and offering guidance for improving future AI-generated video detection systems.
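
The abstract does not spell out the perturbation protocol, but a common probe of this kind compares a detector's verdict on a clip against the same clip with its frames shuffled or reversed. Below is a minimal sketch under that assumption; `detect` is a hypothetical callable standing in for an LVLM-based detector, not an API from the paper.

```python
import random

def perturb_frames(frames, mode="shuffle", seed=0):
    """Return a temporally perturbed copy of a frame sequence."""
    frames = list(frames)
    if mode == "shuffle":
        random.Random(seed).shuffle(frames)
    elif mode == "reverse":
        frames.reverse()
    else:
        raise ValueError(f"unknown perturbation mode: {mode!r}")
    return frames

def temporal_sensitivity(detect, videos, mode="shuffle"):
    """Fraction of videos whose predicted label flips under perturbation.

    A detector that genuinely exploits temporal order should change its
    verdict on many perturbed clips; a near-zero flip rate suggests the
    model scores frames individually, consistent with the paper's finding
    that current LVLMs make limited use of temporal information.
    """
    flips = sum(detect(v) != detect(perturb_frames(v, mode)) for v in videos)
    return flips / len(videos)
```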