SafeScreen: A Safety-First Screening Framework for Personalized Video Retrieval for Vulnerable Users
arXiv cs.CV / 4/7/2026
Key Points
- The article proposes SafeScreen, a safety-first framework for personalized open-domain video retrieval that prevents vulnerable users from being exposed to inappropriate or harmful content in care and child-directed settings.
- SafeScreen enforces individualized safety constraints as a prerequisite by screening candidate videos through a sequential approval/rejection pipeline rather than using engagement-optimized ranking.
- The system extracts safety criteria from user profiles, performs evidence-grounded assessments using adaptive question generation plus multimodal VideoRAG analysis, and then uses LLM-based decision-making to verify safety, appropriateness, and relevance.
- Evaluation in a dementia-care reminiscence case study with synthetic profiles shows SafeScreen prioritizes safety over engagement, diverging from YouTube-like ranking in the majority of cases while maintaining strong safety coverage and quality metrics.
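The screening pipeline described above can be sketched as follows. This is a minimal illustration based only on this summary, not the paper's implementation: all class names, fields, and the `min_relevance` threshold are hypothetical, and `is_safe` stands in for the paper's evidence-grounded assessment (adaptive question generation, multimodal VideoRAG analysis, and LLM-based decision-making).

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Per-user safety criteria extracted from the care profile (assumed shape).
    banned_topics: set

@dataclass
class Video:
    title: str
    topics: set
    relevance: float  # relevance score from an upstream retriever

def is_safe(video: Video, profile: Profile) -> bool:
    # Placeholder for the evidence-grounded safety assessment described above;
    # here reduced to a simple topic-overlap check for illustration.
    return not (video.topics & profile.banned_topics)

def safe_screen(candidates: list, profile: Profile,
                min_relevance: float = 0.5) -> list:
    """Safety is a prerequisite: unsafe videos are rejected outright,
    then only sufficiently relevant ones pass (no engagement re-ranking)."""
    approved = []
    for video in candidates:  # sequential approval/rejection pipeline
        if not is_safe(video, profile):
            continue  # hard reject: individualized safety constraint failed
        if video.relevance >= min_relevance:
            approved.append(video)
    return approved

profile = Profile(banned_topics={"violence", "war"})
candidates = [
    Video("Family picnic, 1970s", {"family", "outdoors"}, 0.9),
    Video("War documentary", {"war", "history"}, 0.95),  # relevant but unsafe
    Video("Cooking show", {"food"}, 0.3),  # safe but not relevant enough
]
print([v.title for v in safe_screen(candidates, profile)])
# → ['Family picnic, 1970s']
```

Note how the highly relevant but unsafe video is rejected before relevance is ever considered, which is the key inversion versus engagement-optimized ranking.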