FairLLaVA: Fairness-Aware Parameter-Efficient Fine-Tuning for Large Vision-Language Assistants

arXiv cs.AI / 3/30/2026


Key Points

  • FairLLaVA proposes a parameter-efficient fine-tuning method that addresses a fairness risk in multimodal LLMs (MLLMs) handling images and text: their performance can be uneven across demographic groups.
  • By minimizing the mutual information with respect to target attributes, it regularizes the model's representations to be demographic-invariant, aiming to reduce inter-group disparities without sacrificing overall performance.
  • FairLLaVA can be attached to existing architectures as a lightweight "plug-in" built on low-rank adapters, improving the fairness of visual instruction following at relatively low cost.
  • On large-scale chest X-ray report generation and dermoscopy VQA benchmarks, the authors report consistent reductions in inter-group disparities, along with gains in equity-scaled clinical performance and natural language generation quality.
  • The code is available on GitHub, and the results suggest applicability across multiple modalities, including medical imaging.
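The low-rank adapter ("LoRA") mechanism behind this kind of plug-in fine-tuning can be sketched as follows. The dimensions, scaling factor, and initialization here are illustrative assumptions, not FairLLaVA's actual configuration:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Forward pass of a linear layer with a LoRA update.

    The pretrained weight W stays frozen; only the low-rank factors
    A (down-projection) and B (up-projection) are trained, so the
    adapter adds r*(d_in + d_out) parameters instead of d_in*d_out.
    """
    r = A.shape[1]
    return x @ W + (x @ A @ B) * (alpha / r)

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8
W = rng.standard_normal((d_in, d_out)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((d_in, r)) * 0.01      # trainable down-projection
B = np.zeros((r, d_out))                       # zero-init: adapter starts as identity

x = rng.standard_normal((4, d_in))
y = lora_forward(x, W, A, B)

# With B zero-initialized, the adapted layer reproduces the frozen base exactly,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(y, x @ W)

full_params = d_in * d_out          # 262144
lora_params = d_in * r + r * d_out  # 8192, i.e. 0.03125 of the full layer
```

Zero-initializing `B` is a standard LoRA choice: the adapter contributes nothing at step zero, and fairness-oriented fine-tuning only gradually moves the model away from its pretrained behavior.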

Abstract

While powerful in image-conditioned generation, multimodal large language models (MLLMs) can display uneven performance across demographic groups, highlighting fairness risks. In safety-critical clinical settings, such disparities risk producing unequal diagnostic narratives and eroding trust in AI-assisted decision-making. While fairness has been studied extensively in vision-only and language-only models, its impact on MLLMs remains largely underexplored. To address these biases, we introduce FairLLaVA, a parameter-efficient fine-tuning method that mitigates group disparities in visual instruction tuning without compromising overall performance. By minimizing the mutual information between target attributes, FairLLaVA regularizes the model's representations to be demographic-invariant. The method can be incorporated as a lightweight plug-in, maintaining efficiency with low-rank adapter fine-tuning, and provides an architecture-agnostic approach to fair visual instruction following. Extensive experiments on large-scale chest radiology report generation and dermoscopy visual question answering benchmarks show that FairLLaVA consistently reduces inter-group disparities while improving both equity-scaled clinical performance and natural language generation quality across diverse medical imaging modalities. Code can be accessed at https://github.com/bhosalems/FairLLaVA.
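The abstract does not spell out how the mutual-information term is estimated. One common way to penalize I(Z; A) between representations Z and a demographic attribute A is a CLUB-style variational upper bound built from a learned classifier q(a|z); the NumPy sketch below is a hypothetical illustration of that idea, not the paper's implementation, and `Wq` stands in for a trained classifier:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def club_mi_upper_bound(z, a, Wq):
    """CLUB-style upper bound on I(Z; A).

    z:  (n, d) representations
    a:  (n,)   integer group labels
    Wq: (d, k) weights of a (hypothetical, already trained) classifier q(a|z)

    Bound = E_{p(z,a)}[log q(a|z)] - E_{p(z)p(a)}[log q(a|z)];
    a fairness regularizer would drive this quantity toward zero.
    """
    logq = np.log(softmax(z @ Wq) + 1e-12)        # (n, k) log q(a|z)
    positive = logq[np.arange(len(a)), a].mean()  # matched (z_i, a_i) pairs
    negative = logq[:, a].mean()                  # all (z_i, a_j) cross pairs
    return positive - negative

# Toy demo with two groups of two samples each
a = np.array([0, 0, 1, 1])
z_leaky = np.eye(2)[a] * 10.0   # representations that encode the group label
z_fair = np.zeros((4, 2))       # group-invariant representations
Wq = np.eye(2) * 10.0           # a confident demographic classifier

leaky = club_mi_upper_bound(z_leaky, a, Wq)  # large: z reveals the group
fair = club_mi_upper_bound(z_fair, a, Wq)    # ~0: z carries no group signal
```

When the representations leak the group label, the bound is large; when they are group-invariant, matched and mismatched pairs score identically and the bound collapses to zero, which is the behavior a demographic-invariance regularizer rewards.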