FAIR_XAI: Improving Multimodal Foundation Model Fairness via Explainability for Wellbeing Assessment

arXiv cs.AI / 4/28/2026

📰 News · Models & Research

Key Points

  • The study examines how multimodal vision-language foundation models (VLMs) perform in wellbeing and depression assessment across both laboratory and naturalistic datasets, with particular attention to diagnostic reliability and demographic fairness.
  • Results show large performance variation by environment and model architecture, with Phi-3.5-Vision reaching 80.4% accuracy on E-DAIC versus Qwen2-VL at 33.9%, and both models tending to over-predict depression on AFAR-BSFT.
  • Bias patterns differ by model: Qwen2-VL exhibits higher gender disparities, while Phi-3.5-Vision shows stronger racial bias across the evaluated settings.
  • XAI-based fairness interventions produced mixed outcomes: fairness prompting achieved perfect equal opportunity for Qwen2-VL on E-DAIC but at a severe accuracy cost, while explainability interventions on AFAR-BSFT improved procedural consistency without ensuring outcome fairness and sometimes amplified racial bias.
  • The authors conclude there is a persistent gap between procedural transparency (explainability) and equitable outcomes, recommending that future fairness approaches jointly optimize predictive accuracy, demographic parity, and cross-domain generalization (the sketch after this list illustrates the fairness metrics involved).
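
Both fairness notions mentioned above are standard group metrics: demographic parity compares positive-prediction rates across demographic groups, while equal opportunity compares true-positive rates. A minimal sketch of how they are typically computed for a binary depression label follows; the function names and toy data are illustrative, not taken from the paper.

```python
# Illustrative group-fairness metrics for binary depression predictions.
# Names and toy data are hypothetical; this is not the paper's code.
from typing import Sequence


def selection_rate(y_pred: Sequence[int]) -> float:
    """Fraction of samples predicted positive (depressed) in one group."""
    return sum(y_pred) / len(y_pred) if y_pred else 0.0


def true_positive_rate(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
    """Recall on the positive class within one group."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0


def demographic_parity_gap(y_pred_a: Sequence[int], y_pred_b: Sequence[int]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(selection_rate(y_pred_a) - selection_rate(y_pred_b))


def equal_opportunity_gap(y_true_a, y_pred_a, y_true_b, y_pred_b) -> float:
    """Absolute TPR difference between two groups; 0.0 means perfect equal opportunity."""
    return abs(true_positive_rate(y_true_a, y_pred_a)
               - true_positive_rate(y_true_b, y_pred_b))


if __name__ == "__main__":
    # Toy labels split by a hypothetical demographic attribute.
    y_true_a, y_pred_a = [1, 1, 0, 0], [1, 0, 1, 0]
    y_true_b, y_pred_b = [1, 1, 0, 0], [1, 1, 0, 0]
    print(demographic_parity_gap(y_pred_a, y_pred_b))                     # 0.0
    print(equal_opportunity_gap(y_true_a, y_pred_a, y_true_b, y_pred_b))  # 0.5
```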

Abstract

In recent years, the integration of multimodal machine learning in wellbeing assessment has offered transformative potential for monitoring mental health. However, with the rapid advancement of Vision-Language Models (VLMs), their deployment in clinical settings has raised concerns due to their lack of transparency and potential for bias. While previous research has explored the intersection of fairness and Explainable AI (XAI), its application to VLMs for wellbeing assessment and depression prediction remains under-explored. This work investigates VLM performance across laboratory (AFAR-BSFT) and naturalistic (E-DAIC) datasets, focusing on diagnostic reliability and demographic fairness. Performance varied substantially across environments and architectures; Phi-3.5-Vision achieved 80.4% accuracy on E-DAIC, while Qwen2-VL struggled at 33.9%. Additionally, both models demonstrated a tendency to over-predict depression on AFAR-BSFT. Although bias existed across both architectures, Qwen2-VL showed higher gender disparities, while Phi-3.5-Vision exhibited more racial bias. Our XAI intervention framework yielded mixed results; fairness prompting achieved perfect equal opportunity for Qwen2-VL at a severe accuracy cost on E-DAIC. On AFAR-BSFT, explainability-based interventions improved procedural consistency but did not guarantee outcome fairness, sometimes amplifying racial bias. These results highlight a persistent gap between procedural transparency and equitable outcomes. We analyse these findings and consolidate concrete recommendations for addressing them, emphasising that future fairness interventions must jointly optimise predictive accuracy, demographic parity, and cross-domain generalisation.
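
The summary does not reproduce the fairness-prompting text itself. As a purely hypothetical illustration of the general pattern, the sketch below wraps a baseline diagnostic instruction with an explicit request to ignore protected attributes; the wording and the placeholder `query_vlm` call are assumptions, not the authors' implementation.

```python
# Hypothetical fairness-prompting wrapper; not the paper's actual prompts.

BASE_PROMPT = (
    "You are reviewing frames and transcript excerpts from a clinical interview. "
    "Answer only 'depressed' or 'not depressed'."
)

FAIRNESS_PREFIX = (
    "Base your judgement solely on behavioural, facial, and verbal indicators of mood. "
    "Do not let the person's apparent gender, race, or age influence the label."
)


def build_prompt(with_fairness_instruction: bool = True) -> str:
    """Compose the instruction sent to the VLM alongside the visual input."""
    if with_fairness_instruction:
        return f"{FAIRNESS_PREFIX}\n\n{BASE_PROMPT}"
    return BASE_PROMPT


# Example (placeholder call; substitute whatever inference API the deployment uses):
# label = query_vlm(model="Qwen2-VL", image=frame, prompt=build_prompt())
```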