FED-Bench: A Cross-Granular Benchmark for Disentangled Evaluation of Facial Expression Editing

arXiv cs.CV / 4/1/2026


Key Points

  • FED-Bench is proposed as a new facial expression image-editing benchmark designed to support fine-grained control while preserving identity and background, addressing limitations in prior benchmarks.
  • The benchmark includes 747 original–instruction–ground-truth triplets built via a cascaded, scalable pipeline, enabling more rigorous and instruction-accurate evaluation.
  • A new evaluation protocol called FED-Score separates scoring into three dimensions—Alignment (instruction following), Fidelity (image quality and identity preservation), and Relative Expression Gain (expression change magnitude)—to reduce systemic metric biases.
  • Experiments across 18 editing models show they typically cannot achieve high fidelity and accurate expression manipulation simultaneously, with fine-grained instruction following identified as the main bottleneck.
  • The authors also provide a 20k+ in-the-wild training set for facial expression editing and show that fine-tuning a baseline model on it yields significant gains; the benchmark and code are slated for public release.

Abstract

Facial expression image editing requires fine-grained control to strictly preserve human identity and background while precisely manipulating expression. However, existing editing benchmarks primarily focus on general scenarios, lacking high-quality facial images and corresponding editing instructions. Furthermore, current evaluation metrics exhibit systemic biases in this task, often favoring lazy editing or overfit editing. To bridge these gaps, we propose FED-Bench, a comprehensive benchmark featuring rigorous testing and an accurate evaluation suite. First, we carefully construct a benchmark of 747 triplets through a cascaded and scalable pipeline, each comprising an original image, an editing instruction, and a ground-truth image for precise evaluation. Second, we introduce FED-Score, a cross-granularity evaluation protocol that disentangles assessment into three dimensions: Alignment for verifying instruction following, Fidelity for testing image quality and identity preservation, and Relative Expression Gain for quantifying the magnitude of expression changes, effectively mitigating the aforementioned evaluation biases. Third, we benchmark 18 image editing models, revealing that current approaches struggle to simultaneously achieve high fidelity and accurate expression manipulation, with fine-grained instruction following identified as the primary bottleneck. Finally, leveraging the scalable nature of the introduced benchmark engine, we provide a 20k+ in-the-wild facial training set and demonstrate its effectiveness by fine-tuning a baseline model, which achieves significant performance gains. Our benchmark and related code will be made publicly available soon.
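The abstract does not specify how the three FED-Score dimensions are computed or aggregated, but a minimal sketch can illustrate why disentangling them helps. In the hypothetical snippet below, the dimension names and the threshold values are placeholders (not from the paper); the point is that "lazy editing" and "overfit editing" pull Fidelity and Relative Expression Gain in opposite directions, which a single blended metric would mask:

```python
from dataclasses import dataclass

@dataclass
class FEDScore:
    """Hypothetical container for the three disentangled FED-Score
    dimensions, each assumed normalized to [0, 1]."""
    alignment: float        # instruction following
    fidelity: float         # image quality + identity/background preservation
    expression_gain: float  # relative magnitude of the expression change

def diagnose(score: FEDScore, low: float = 0.2, high: float = 0.8) -> str:
    """Classify the two failure modes the abstract mentions.
    Thresholds are arbitrary placeholders for illustration only."""
    if score.expression_gain < low and score.fidelity > high:
        # Near-identity output: high fidelity, but no real expression edit.
        return "lazy editing"
    if score.expression_gain > high and score.fidelity < low:
        # Exaggerated edit that destroys identity or background.
        return "overfit editing"
    return "balanced"
```

A blended average over the three numbers would score a lazy edit (e.g. fidelity 0.95, gain 0.05) and a balanced edit similarly; reporting the dimensions separately makes the failure mode explicit.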