Raising the Ceiling: Better Empirical Fixation Densities for Saliency Benchmarking
arXiv cs.CV / 5/6/2026
📰 News · Models & Research
Key Points
- Empirical fixation density maps are central to saliency benchmarking, influencing leaderboard outcomes and scientific claims about human visual attention, but the commonly used kernel density estimation (KDE) procedure has remained largely unchanged for decades.
- The paper introduces a mixture model that improves per-image fixation density estimation by combining an adaptive-bandwidth KDE (Abramson-style), center-bias and uniform components, and a strong saliency model, with parameters optimized per image via leave-one-subject-out cross-validation.
- Experiments across multiple benchmarks show that the refined densities better predict held-out observers' fixations, with median per-image log-likelihood gains of 5–15% and AUC improvements of up to 2 percentage points.
- The largest improvements occur on the most critical images for failure-case analysis (over 25% gains), and the authors use the refined densities to reveal remaining failure cases in state-of-the-art saliency models, indicating continued room for model improvements.
- Overall, the work argues that fixation densities should be treated as continually improving estimates rather than static ground truth.
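The mixture recipe in the second bullet can be sketched in a few dozen lines. The sketch below is illustrative, not the paper's implementation: the component forms, bandwidths, toy data, and simple grid search over simplex weights are all assumptions, and the deep saliency-model component is omitted (only the adaptive KDE, center-bias, and uniform components appear). It fits the mixture weights per image by maximizing held-out log-likelihood under leave-one-subject-out cross-validation, as the summary describes.

```python
import numpy as np

def gauss2(x, mu, h):
    """Isotropic 2D Gaussian density with std h, evaluated at points x (n, 2)."""
    d2 = np.sum((x - mu) ** 2, axis=-1)
    return np.exp(-d2 / (2 * h * h)) / (2 * np.pi * h * h)

def adaptive_kde(train, query, h0=0.08):
    """Abramson-style adaptive KDE: per-point bandwidths shrink where the
    pilot density is high and grow where it is low."""
    pilot = np.array([gauss2(train, t, h0).mean() for t in train])
    g = np.exp(np.mean(np.log(pilot)))        # geometric mean of pilot density
    lam = (pilot / g) ** -0.5                 # Abramson local bandwidth factor
    dens = np.zeros(len(query))
    for t, l in zip(train, lam):
        dens += gauss2(query, t, h0 * l)
    return dens / len(train)

def mixture_density(train, query, w, h0=0.08):
    """w = (kde, center-bias, uniform) weights on the simplex.
    Image is the unit square; center bias is a broad Gaussian at (0.5, 0.5).
    Gaussians are not renormalized for truncation to the square (sketch only)."""
    kde = adaptive_kde(train, query, h0)
    center = gauss2(query, np.array([0.5, 0.5]), 0.25)
    uniform = np.ones(len(query))             # uniform density on unit square
    return w[0] * kde + w[1] * center + w[2] * uniform

def fit_weights_loso(fix_by_subject, grid_steps=5):
    """Choose mixture weights maximizing mean held-out log-likelihood under
    leave-one-subject-out cross-validation (coarse grid instead of a proper
    optimizer, for brevity)."""
    ws = [(a, b, 1 - a - b)
          for a in np.linspace(0, 1, grid_steps)
          for b in np.linspace(0, 1, grid_steps) if a + b <= 1]

    def loso_ll(w):
        ll = 0.0
        for i, held in enumerate(fix_by_subject):
            train = np.concatenate(
                [f for j, f in enumerate(fix_by_subject) if j != i])
            ll += np.mean(np.log(mixture_density(train, held, w) + 1e-12))
        return ll / len(fix_by_subject)

    return max(ws, key=loso_ll)

# Toy "fixations": 4 subjects, each clustered on a common object at (0.3, 0.6).
rng = np.random.default_rng(0)
subjects = [np.clip(rng.normal([0.3, 0.6], 0.07, (25, 2)), 0, 1)
            for _ in range(4)]
w = fit_weights_loso(subjects)
print("weights (kde, center, uniform):", np.round(w, 2))
```

On data this concentrated, the held-out log-likelihood favors the KDE component; on sparser or noisier fixation sets, the cross-validation shifts weight toward the center-bias and uniform components, which is the regularizing effect the mixture is meant to provide.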