MDS-VQA: Model-Informed Data Selection for Video Quality Assessment
arXiv cs.CV / 3/13/2026
Key Points
- MDS-VQA proposes a model-informed data selection mechanism to curate unlabeled videos that are both difficult for the base VQA model and diverse in content.
- Difficulty is estimated by a failure predictor trained with a ranking objective, and diversity is measured using deep semantic video features, with a greedy procedure balancing the two under a constrained labeling budget.
- Experiments across multiple VQA datasets show that fine-tuning on only 5% of the labeled samples improves mean SRCC from 0.651 to 0.722 and achieves the top gMAD rank.
- The work demonstrates the value of data-centric selection for active fine-tuning, highlighting a practical approach to improving adaptation and generalization in video quality assessment.
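The selection mechanism in the key points (difficulty scores from a failure predictor, diversity over deep video features, greedy trade-off under a budget) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `lam` weight, cosine-distance diversity measure, and `greedy_select` helper are assumptions for the sketch, and the difficulty scores stand in for the ranking-trained failure predictor's outputs.

```python
import numpy as np

def greedy_select(difficulty, features, budget, lam=0.5):
    """Greedily pick `budget` videos that are hard for the base VQA model
    yet diverse in feature space (hypothetical sketch, not the paper's code).

    difficulty: (N,) predicted failure scores for the base VQA model
    features:   (N, D) deep semantic video features
    lam:        assumed weight trading off difficulty vs. diversity
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    chosen = []
    min_dist = None  # cosine distance to the nearest already-selected video
    for _ in range(budget):
        # First pick: pure difficulty; afterwards add the diversity bonus.
        gain = difficulty.astype(float).copy() if min_dist is None \
            else difficulty + lam * min_dist
        gain[chosen] = -np.inf  # never re-select a video
        i = int(np.argmax(gain))
        chosen.append(i)
        # Update each candidate's distance to the selected set.
        d = 1.0 - feats @ feats[i]
        min_dist = d if min_dist is None else np.minimum(min_dist, d)
    return chosen
```

The diversity term penalizes candidates near already-selected videos, so the greedy loop avoids spending the constrained labeling budget on near-duplicate content.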