Erase Persona, Forget Lore: Benchmarking Multimodal Copyright Unlearning in Large Vision Language Models
arXiv cs.CV / 5/6/2026
Key Points
- The paper highlights that large vision-language models (LVLMs) may memorize and reproduce copyrighted visual content, and that machine unlearning could help mitigate this risk after training.
- The authors argue that existing evaluations of multimodal (cross-modal) copyright unlearning lack robustness: they rarely test whether a concept stays erased across different visual variations of the same content.
- The authors introduce CoVUBench, a benchmark framework specifically built to evaluate copyright-related unlearning in LVLMs.
- CoVUBench uses procedurally generated, legally safe synthetic data with systematic visual variations (including changes in composition and rendering domain) to test whether forgetting generalizes beyond the exact training images; a data-generation sketch follows this list.
- The evaluation protocol scores both forgetting effectiveness (the copyright holder's concern) and retention of general utility (the deployer's concern), making the trade-off between the two explicit; a scoring sketch appears after the data-generation example below.
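
To make the "systematic visual variations" idea concrete, here is a minimal sketch of how such a probe set could be built by crossing composition transforms with rendering-domain transforms. The specific transforms, file name, and function names are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: generate visual variants of a synthetic concept image so that
# no single surface cue (pose, crop, style) identifies the concept.
# NOTE: the transform set below is an assumption for illustration;
# CoVUBench's real generation procedure may differ.
from PIL import Image, ImageFilter, ImageOps


def composition_variants(img: Image.Image) -> dict[str, Image.Image]:
    """Vary where/how the concept appears, keeping its rendering style."""
    w, h = img.size
    return {
        "original": img,
        "mirrored": ImageOps.mirror(img),
        "cropped": img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4)).resize((w, h)),
        "rotated": img.rotate(15, expand=False),
    }


def domain_variants(img: Image.Image) -> dict[str, Image.Image]:
    """Vary the rendering domain, keeping the composition fixed."""
    return {
        "grayscale": ImageOps.grayscale(img).convert("RGB"),
        "posterized": ImageOps.posterize(img, bits=2),  # crude flat-color look
        "edges": img.convert("L").filter(ImageFilter.FIND_EDGES).convert("RGB"),
    }


def build_probe_set(img: Image.Image) -> dict[str, Image.Image]:
    """Cross composition x domain to probe generalization of forgetting."""
    probes: dict[str, Image.Image] = {}
    for c_name, c_img in composition_variants(img).items():
        for d_name, d_img in domain_variants(c_img).items():
            probes[f"{c_name}/{d_name}"] = d_img
    return probes


if __name__ == "__main__":
    base = Image.open("synthetic_persona.png").convert("RGB")  # hypothetical asset
    probes = build_probe_set(base)
    print(f"{len(probes)} probe images, e.g. {sorted(probes)[:3]}")
```

The point of the cross product is that a model which merely stops recognizing one canonical rendering will still be caught by a stylized or re-composed variant of the same concept.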
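
The dual-perspective protocol can also be sketched as a small scoring routine: forgetting is measured on the concept probes (copyright-holder view) and utility on held-out general tasks (deployer view). The metric names, the example format, and the harmonic-mean aggregate below are assumptions for illustration, not the paper's actual protocol.

```python
# Sketch: score an unlearned model from both perspectives described above.
# ASSUMPTION: examples are (image_path, expected_answer) pairs and the model
# is any callable mapping an image path to an answer string; CoVUBench's
# real evaluation interface may differ.
from statistics import harmonic_mean
from typing import Callable, Iterable, Tuple

Example = Tuple[str, str]  # (image_path, expected_answer) -- hypothetical format


def accuracy(model: Callable[[str], str], examples: Iterable[Example]) -> float:
    examples = list(examples)
    hits = sum(model(img) == ans for img, ans in examples)
    return hits / len(examples)


def unlearning_report(model, forget_probes, retain_tasks) -> dict:
    forget_acc = accuracy(model, forget_probes)  # want this near chance level
    retain_acc = accuracy(model, retain_tasks)   # want this unchanged
    forget_score = 1.0 - forget_acc
    # A harmonic mean punishes collapsing either side of the trade-off:
    # a model that forgets everything (including general skills) scores low,
    # as does one that retains utility but still recalls the concept.
    combined = harmonic_mean([max(forget_score, 1e-9), max(retain_acc, 1e-9)])
    return {"forget_acc": forget_acc, "retain_acc": retain_acc, "combined": combined}
```

Reporting the two accuracies separately alongside any aggregate keeps the trade-off visible rather than hiding it behind a single number.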