WeNLEX: Weakly Supervised Natural Language Explanations for Multilabel Chest X-ray Classification

arXiv cs.CV / 3/20/2026

Key Points

  • WeNLEX introduces a weakly supervised approach to generate natural language explanations for multilabel chest X-ray classification, reducing the need for large annotated explanation datasets.
  • Faithfulness is enforced by generating images from explanations and matching them to the original images in the black-box model's feature space.
  • Plausibility is ensured through distribution alignment with a small database of clinician-annotated explanations, enabling credible explanations with as few as 5 ground-truth examples per diagnosis.
  • The method works in both post-hoc and in-model settings, and when trained jointly, it improves the classifier AUC by 2.21%, showing interpretability can boost downstream performance.
  • Explanations can be adapted to different audiences by changing the explanation database, demonstrated with a layman version for non-medical users.
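The faithfulness mechanism in the second bullet, matching the image generated from an explanation against the original image in the black-box model's feature space, can be sketched as a simple feature-matching loss. The feature extractor below is a toy stand-in (a fixed random projection), and all names are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

# Fixed random projection standing in for the frozen black-box
# classifier's feature extractor (illustrative only).
_PROJ = np.random.default_rng(0).standard_normal((64, 16))

def extract_features(image):
    # Map an 8x8 "X-ray" into the black-box model's feature space.
    return image.reshape(-1) @ _PROJ

def faithfulness_loss(original_image, generated_image):
    # Penalize feature-space mismatch between the original image and
    # the image generated from its natural language explanation.
    f_orig = extract_features(original_image)
    f_gen = extract_features(generated_image)
    return float(np.mean((f_orig - f_gen) ** 2))

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))                 # original image (toy)
x_hat = x + 0.1 * rng.standard_normal((8, 8))   # image generated from explanation
```

Minimizing this loss pushes the explanation to encode exactly the evidence the black-box model relies on, since a reconstruction that drops model-relevant content cannot match in feature space.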

Abstract

Natural language explanations provide an inherently human-understandable way to explain black-box models, closely reflecting how radiologists convey their diagnoses in textual reports. Most works explicitly supervise the explanation generation process using datasets annotated with explanations. Thus, though plausible, the generated explanations are not faithful to the model's reasoning. In this work, we propose WeNLEX, a weakly supervised model for the generation of natural language explanations for multilabel chest X-ray classification. Faithfulness is ensured by matching images generated from their corresponding natural language explanations with original images, in the black-box model's feature space. Plausibility is maintained via distribution alignment with a small database of clinician-annotated explanations. We empirically demonstrate, through extensive validation on multiple metrics to assess faithfulness, simulatability, diversity, and plausibility, that WeNLEX is able to produce faithful and plausible explanations, using as few as 5 ground-truth explanations per diagnosis. Furthermore, WeNLEX can operate in both post-hoc and in-model settings. In the latter, i.e., when the multilabel classifier is trained together with the rest of the network, WeNLEX improves the classification AUC of the standalone classifier by 2.21%, thus showing that adding interpretability to the training process can actually increase the downstream task performance. Additionally, simply by changing the database, WeNLEX explanations are adaptable to any target audience, and we showcase this flexibility by training a layman version of WeNLEX, where explanations are simplified for non-medical users.
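The plausibility term, distribution alignment with a handful of clinician-annotated explanations, can be illustrated with a maximum mean discrepancy (MMD) estimate between explanation embeddings. MMD is one standard choice for aligning distributions from very few samples; the paper's exact alignment objective may differ, and the embeddings below are synthetic:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel between rows of a and rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared maximum mean discrepancy between
    # generated-explanation embeddings x and database embeddings y.
    return float(rbf_kernel(x, x, sigma).mean()
                 + rbf_kernel(y, y, sigma).mean()
                 - 2.0 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
database = rng.standard_normal((5, 4))   # 5 clinician examples per diagnosis (toy)
generated = rng.standard_normal((5, 4))  # embeddings of generated explanations
```

Driving `mmd2(generated, database)` toward zero keeps generated explanations statistically indistinguishable from the clinician-written ones, which is how the method sustains plausibility from as few as 5 examples per diagnosis.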