Generative Score Inference for Multimodal Data

arXiv cs.AI · March 30, 2026

Key Points

  • The paper proposes Generative Score Inference (GSI), a flexible framework for statistically valid uncertainty quantification in supervised learning with multimodal inputs like images and text.
  • GSI approximates conditional score distributions using synthetic samples generated from deep generative models, aiming to avoid the restrictive assumptions common in existing uncertainty methods (a minimal sketch of this idea follows the list).
  • The authors validate GSI on two settings: hallucination detection in large language models and uncertainty estimation for image captioning.
  • Results show state-of-the-art performance for hallucination detection and robust predictive uncertainty for image captioning, with performance that improves as the quality of the underlying generative model increases.
  • The work positions GSI as a broadly applicable inference approach that can improve decision reliability and trustworthiness in multimodal learning systems.
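
To make the mechanism in the second bullet concrete, here is a minimal, hypothetical Python sketch of a conformal-style procedure consistent with the summary: draw synthetic outputs from a generative model to approximate the conditional score distribution at each input, convert scores into conditional ranks, and calibrate a threshold on held-out data. The names `sampler` and `score_fn`, the rank construction, and the calibration rule are all assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def conditional_rank(score, synth_scores):
    """Monte Carlo estimate of P(S <= score | X): where a score falls
    within the approximated conditional score distribution."""
    return (np.sum(synth_scores <= score) + 1) / (len(synth_scores) + 1)

def synthetic_scores(x, sampler, score_fn, m):
    """Approximate the conditional score distribution at input x using
    m synthetic outputs drawn from the generative model."""
    return np.array([score_fn(x, sampler(x)) for _ in range(m)])

def calibrate_threshold(cal_pairs, sampler, score_fn, alpha=0.1, m=100):
    """Split-conformal-style calibration on conditional ranks: pick the
    largest threshold tau that still retains roughly a 1 - alpha
    fraction of held-out true labels."""
    ranks = sorted(
        conditional_rank(score_fn(x, y), synthetic_scores(x, sampler, score_fn, m))
        for x, y in cal_pairs
    )
    k = int(np.floor(alpha * (len(ranks) + 1)))
    return ranks[k - 1] if k >= 1 else 0.0  # tau = 0 keeps every candidate

def prediction_set(x, candidates, sampler, score_fn, tau, m=100):
    """Keep every candidate whose conditional rank clears the threshold."""
    synth = synthetic_scores(x, sampler, score_fn, m)
    return [c for c in candidates if conditional_rank(score_fn(x, c), synth) >= tau]
```

Under exchangeability of calibration and test pairs, thresholding conditional ranks this way yields the kind of statistically valid marginal coverage the Key Points describe, while the per-input synthetic distribution is what lets the sets adapt to the difficulty of each multimodal input.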

Abstract

Accurate uncertainty quantification is crucial for making reliable decisions in various supervised learning scenarios, particularly when dealing with complex, multimodal data such as images and text. Current approaches often face notable limitations, including rigid assumptions and limited generalizability, constraining their effectiveness across diverse supervised learning tasks. To overcome these limitations, we introduce Generative Score Inference (GSI), a flexible inference framework capable of constructing statistically valid and informative prediction and confidence sets across a wide range of multimodal learning problems. GSI utilizes synthetic samples generated by deep generative models to approximate conditional score distributions, facilitating precise uncertainty quantification without imposing restrictive assumptions about the data or tasks. We empirically validate GSI's capabilities through two representative scenarios: hallucination detection in large language models and uncertainty estimation in image captioning. Our method achieves state-of-the-art performance in hallucination detection and robust predictive uncertainty in image captioning, and its performance is positively influenced by the quality of the underlying generative model. These findings underscore the potential of GSI as a versatile inference framework, significantly enhancing uncertainty quantification and trustworthiness in multimodal learning.
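
As a usage illustration for the hallucination-detection setting the abstract mentions, one plausible wiring is to flag an LLM answer whose score looks atypical under the approximated conditional score distribution. The snippet below reuses the hypothetical helpers from the sketch above and is an assumed application pattern, not the authors' exact pipeline.

```python
def is_flagged_hallucination(x, answer, sampler, score_fn, tau, m=100):
    """Flag an answer whose conditional rank falls below the calibrated
    threshold tau, i.e. whose score is atypical among synthetic answers
    drawn from the generative model for the same prompt x."""
    synth = synthetic_scores(x, sampler, score_fn, m)
    return conditional_rank(score_fn(x, answer), synth) < tau

# Example wiring (all names hypothetical): sampler(x) draws an answer
# from the LLM for prompt x, and score_fn(x, y) could be the mean
# token log-probability the model assigns to y given x.
# tau = calibrate_threshold(calibration_pairs, sampler, score_fn, alpha=0.1)
# flagged = is_flagged_hallucination(prompt, candidate_answer, sampler, score_fn, tau)
```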