
VTEdit-Bench: A Comprehensive Benchmark for Multi-Reference Image Editing Models in Virtual Try-On

arXiv cs.CV · March 13, 2026


Key Points

  • The VTEdit-Bench benchmark was introduced to evaluate universal multi-reference image editing models in virtual try-on (VTON) scenarios.
  • VTEdit-Bench contains 24,220 test image pairs across five representative VTON tasks to enable systematic analysis of robustness and generalization.
  • The authors also propose VTEdit-QA, a reference-aware VLM-based evaluator that assesses model consistency, cloth consistency, and overall image quality (a minimal sketch of such a judge follows this list).
  • The study compares eight universal editing models against seven specialized VTON models: top universal editors are competitive on conventional VTON tasks and generalize more stably to harder scenarios, but struggle with complex multi-cloth conditioning.
  • The results highlight remaining difficulties with complex reference configurations, indicating avenues for improving universal VTON methods.
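
VTEdit-QA is described here only at a high level, so the following is a minimal sketch of what a reference-aware VLM judge along those lines could look like. Everything in it is an illustrative assumption rather than the authors' implementation: the `query_vlm` placeholder, the rubric wording, and the 1-5 scoring scale are all hypothetical.

```python
import json

# Rubric questions for the three aspects VTEdit-QA is said to assess.
# The wording below is an illustrative guess, not the paper's prompts.
ASPECTS = {
    "model_consistency": (
        "Does the person's identity, pose, and body shape match the "
        "model reference image?"
    ),
    "cloth_consistency": (
        "Do the garment's color, texture, pattern, and shape match the "
        "clothing reference image(s)?"
    ),
    "image_quality": (
        "Is the edited image free of artifacts, with coherent lighting "
        "and realistic fabric draping?"
    ),
}


def query_vlm(image_paths: list[str], prompt: str) -> str:
    """Placeholder for any vision-language model call that accepts several
    images plus a text prompt and returns text (an assumption; swap in a
    real multimodal API here)."""
    raise NotImplementedError


def score_try_on(result: str, model_ref: str, cloth_refs: list[str]) -> dict:
    """Score one edited try-on image against its reference images."""
    scores = {}
    for aspect, question in ASPECTS.items():
        prompt = (
            "You are grading a virtual try-on result. The first image is "
            "the edited output; the remaining images are references. "
            f"{question} Answer with one integer from 1 (poor) to 5 "
            '(excellent), formatted as JSON: {"score": <int>}.'
        )
        reply = query_vlm([result, model_ref, *cloth_refs], prompt)
        scores[aspect] = json.loads(reply)["score"]
    return scores
```

Passing the references alongside the edited output is what would make such a judge "reference-aware": the consistency questions are answered by direct visual comparison rather than from the edited image alone.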

Abstract

As virtual try-on (VTON) continues to advance, a growing number of real-world scenarios have emerged, pushing beyond the abilities of existing specialized VTON models. Meanwhile, universal multi-reference image editing models have progressed rapidly and exhibit strong generalization in visual editing, suggesting a promising route toward more flexible VTON systems. However, the strengths and limitations of universal editors for VTON remain insufficiently explored due to the lack of systematic evaluation benchmarks. To address this gap, we introduce VTEdit-Bench, a comprehensive benchmark designed to evaluate universal multi-reference image editing models across various realistic VTON scenarios. VTEdit-Bench contains 24,220 test image pairs spanning five representative VTON tasks with progressively increasing complexity, enabling systematic analysis of robustness and generalization. We further propose VTEdit-QA, a reference-aware VLM-based evaluator that assesses VTON performance from three key aspects: model consistency, cloth consistency, and overall image quality. Through this framework, we systematically evaluate eight universal editing models and compare them with seven specialized VTON models. Results show that top universal editors are competitive on conventional tasks and generalize more stably to harder scenarios, but remain challenged by complex reference configurations, particularly multi-cloth conditioning.