V-DyKnow: A Dynamic Benchmark for Time-Sensitive Knowledge in Vision Language Models
arXiv cs.AI / 3/18/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- V-DyKnow introduces a Visual Dynamic Knowledge benchmark for evaluating time-sensitive factual knowledge in Vision-Language Models (VLMs) across multimodal inputs (images and text).
- The study benchmarks both closed- and open-source VLMs, analyzing the reliability of their responses across modalities and input perturbations, as well as the effectiveness of knowledge editing and multimodal RAG methods for updating knowledge.
- Findings show that VLMs frequently produce outdated facts because they rely on static training snapshots, and that factual reliability degrades further when knowledge is queried visually rather than textually.
- The authors release the benchmark, code, and evaluation data to support broader research on how VLMs acquire and update time-sensitive knowledge across modalities.
Related Articles
Is AI becoming a bubble, and could it end like the dot-com crash?
Reddit r/artificial

Externalizing State
Dev.to

I made a 'benchmark' where LLMs write code controlling units in a 1v1 RTS game.
Dev.to

My AI Does Not Have a Clock
Dev.to
How to settle on a coding LLM? What parameters to watch out for?
Reddit r/LocalLLaMA