PRISM: A Multi-View Multi-Capability Retail Video Dataset for Embodied Vision-Language Models
arXiv cs.CV / 4/1/2026
Key Points
- PRISM is a new supervised fine-tuning (SFT) dataset of 270K multi-view retail video samples, designed for embodied vision-language models in real-world supermarket settings.
- The dataset is built on a 3D knowledge ontology spanning spatial, temporal/physical, and embodied action knowledge, enabling evaluation across 20+ capability probes.
- PRISM includes diverse viewpoints (egocentric, exocentric, and 360°) from five supermarket locations, with multiple supervision formats such as open-ended, chain-of-thought, and multiple-choice (a hypothetical sample layout is sketched after this list).
- Fine-tuning embodied VLMs on PRISM lowers error rates across all probes, a 66.6% reduction versus the pre-trained baseline, with the largest improvement in embodied action understanding (+36.4% accuracy).
- The authors position PRISM as one of the largest domain-specific video SFT corpora (roughly 11.8M frames and 730M tokens) and release the dataset at dreamvu.ai/prism.
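To make the dataset's structure concrete, here is a minimal sketch of how a single PRISM supervision sample could be represented, combining a viewpoint, an ontology axis, a capability probe, and a supervision format. The class name, field names, file path, and question contents below are all invented for illustration; the actual release at dreamvu.ai/prism may use an entirely different layout.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# NOTE: hypothetical schema for illustration only; not the official
# PRISM format. Only the viewpoint, supervision-format, and ontology
# categories are taken from the paper summary.

class View(Enum):
    EGOCENTRIC = "egocentric"
    EXOCENTRIC = "exocentric"
    PANORAMIC_360 = "360"

class SupervisionFormat(Enum):
    OPEN_ENDED = "open_ended"
    CHAIN_OF_THOUGHT = "chain_of_thought"
    MULTIPLE_CHOICE = "multiple_choice"

class KnowledgeAxis(Enum):
    SPATIAL = "spatial"
    TEMPORAL_PHYSICAL = "temporal_physical"
    EMBODIED_ACTION = "embodied_action"

@dataclass
class PrismSample:
    video_path: str                      # clip from one of the five supermarket sites
    view: View                           # camera viewpoint for this clip
    axis: KnowledgeAxis                  # which ontology axis the probe targets
    capability_probe: str                # one of the 20+ fine-grained probes
    supervision_format: SupervisionFormat
    question: str
    answer: str
    choices: Optional[list[str]] = None  # populated only for multiple-choice samples

# Example instance (contents invented for illustration):
sample = PrismSample(
    video_path="clips/store03/aisle7_000123.mp4",
    view=View.EGOCENTRIC,
    axis=KnowledgeAxis.EMBODIED_ACTION,
    capability_probe="next_action_prediction",
    supervision_format=SupervisionFormat.MULTIPLE_CHOICE,
    question="The agent has just picked up a cereal box. What should it do next?",
    answer="B",
    choices=["A. Return it to the shelf", "B. Place it in the cart",
             "C. Scan it at the register", "D. Hand it to a shopper"],
)
print(sample.capability_probe, sample.view.value)
```

A flat record like this makes it straightforward to filter the corpus by viewpoint, ontology axis, or supervision format when building per-probe evaluation splits.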