AI Navigate

On the Reliability of Cue Conflict and Beyond

arXiv cs.CV / 3/12/2026


Key Points

  • The paper critiques current cue-conflict and stylization-based methods for measuring shape-texture bias in neural networks, showing they can produce unstable and ambiguous bias estimates.
  • It identifies specific issues: cue invalidity, imbalance, and restricted evaluation space can distort bias measurements and confound interpretation.
  • The authors propose REFINED-BIAS, a dataset and evaluation framework that uses explicit shape/texture cues and a ranking-based metric to measure cue-specific sensitivity across the full label space.
  • REFINED-BIAS enables fairer cross-model comparisons across diverse training regimes and architectures and yields clearer, more faithful conclusions about shape vs. texture bias.
  • The work resolves inconsistencies in prior cue-conflict evaluations and advances interpretable diagnosis of model biases.
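To make the ranking idea concrete, here is a minimal sketch of a full-label-space, ranking-based cue-sensitivity measure. This is an illustrative stand-in, not the paper's exact metric: the function names (`cue_ranks`, `ranking_sensitivity`) and the use of mean reciprocal rank are assumptions for exposition. The key property it shares with the described approach is that each cue's label is ranked against every class, rather than only against a preselected subset.

```python
def cue_ranks(scores, shape_label, texture_label):
    # Rank (1 = top prediction) of each cue's label in the model's
    # full class ordering for one cue-conflict image.
    order = sorted(range(len(scores)), key=lambda c: -scores[c])
    rank = {c: i + 1 for i, c in enumerate(order)}
    return rank[shape_label], rank[texture_label]

def ranking_sensitivity(batch):
    # batch: list of (scores, shape_label, texture_label) triples.
    # Returns the mean reciprocal rank of the shape and texture labels,
    # measured over the full label space -- a hypothetical proxy for
    # cue-specific sensitivity.
    shape_rr, texture_rr = [], []
    for scores, s, t in batch:
        rs, rt = cue_ranks(scores, s, t)
        shape_rr.append(1.0 / rs)
        texture_rr.append(1.0 / rt)
    n = len(batch)
    return sum(shape_rr) / n, sum(texture_rr) / n
```

Because both cues receive an absolute score, a model that is weakly sensitive to *both* cues is distinguishable from one that is strongly sensitive to both, which a single ratio cannot express.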

Abstract

Understanding how neural networks rely on visual cues offers a human-interpretable view of their internal decision processes. The cue-conflict benchmark has been influential in probing shape-texture preference and in motivating the insight that stronger, human-like shape bias is often associated with improved in-domain performance. However, we find that the current stylization-based instantiation can yield unstable and ambiguous bias estimates. Specifically, stylization may not reliably instantiate perceptually valid and separable cues nor control their relative informativeness, ratio-based bias can obscure absolute cue sensitivity, and restricting evaluation to preselected classes can distort model predictions by ignoring the full decision space. Together, these factors can confound preference with cue validity, cue balance, and recognizability artifacts. We introduce REFINED-BIAS, an integrated dataset and evaluation framework for reliable and interpretable shape-texture bias diagnosis. REFINED-BIAS constructs balanced, human- and model-recognizable cue pairs using explicit definitions of shape and texture, and measures cue-specific sensitivity over the full label space via a ranking-based metric, enabling fairer cross-model comparisons. Across diverse training regimes and architectures, REFINED-BIAS yields more faithful diagnoses of shape and texture biases and clearer empirical conclusions, resolving inconsistencies that prior cue-conflict evaluations could not reliably disambiguate.
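The abstract's point that "ratio-based bias can obscure absolute cue sensitivity" is easiest to see in code. The sketch below shows the conventional ratio-style shape-bias computation used in cue-conflict evaluations (in the spirit of the standard benchmark, though the exact bookkeeping here is an assumption): trials where the model predicts neither the shape class nor the texture class are simply discarded, so the resulting ratio says nothing about how strongly the model responds to either cue in absolute terms.

```python
def ratio_shape_bias(predictions):
    # predictions: list of (predicted_label, shape_label, texture_label).
    # Conventional cue-conflict bias: among trials decided for either cue,
    # the fraction decided by shape. Trials where the prediction matches
    # neither cue class are dropped, which is exactly how absolute cue
    # sensitivity gets obscured.
    shape_hits = sum(1 for p, s, t in predictions if p == s)
    texture_hits = sum(1 for p, s, t in predictions if p == t)
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else float("nan")
```

Two models, one predicting a cue class on 90% of trials and one on 5%, can report the identical shape-bias ratio, which is one of the ambiguities the paper's ranking-based metric is designed to avoid.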