ProactiveBench: Benchmarking Proactiveness in Multimodal Large Language Models

arXiv cs.CV / March 23, 2026


Key Points

  • ProactiveBench is introduced as a benchmark built from seven repurposed datasets to test proactiveness in multimodal large language models across tasks such as recognizing occluded objects, enhancing image quality, and interpreting coarse sketches.
  • The evaluation of 22 MLLMs shows that current models generally lack proactiveness, and proactiveness does not correlate with model capacity.
  • The study finds that hinting at proactiveness yields only marginal gains, and conversation histories and in-context learning introduce negative biases that hinder performance.
  • A simple reinforcement learning-based fine-tuning strategy shows that proactiveness can be learned and can generalize to unseen scenarios, with ProactiveBench publicly released to spur development of proactive multimodal models.

Abstract

Effective collaboration begins with knowing when to ask for help. For example, when trying to identify an occluded object, a human would ask someone to remove the obstruction. Can MLLMs exhibit a similar "proactive" behavior by requesting simple user interventions? To investigate this, we introduce ProactiveBench, a benchmark built from seven repurposed datasets that tests proactiveness across different tasks such as recognizing occluded objects, enhancing image quality, and interpreting coarse sketches. We evaluate 22 MLLMs on ProactiveBench, showing that (i) they generally lack proactiveness; (ii) proactiveness does not correlate with model capacity; (iii) "hinting" at proactiveness yields only marginal gains. Surprisingly, we find that conversation histories and in-context learning introduce negative biases, hindering performance. Finally, we explore a simple fine-tuning strategy based on reinforcement learning: its results suggest that proactiveness can be learned, even generalizing to unseen scenarios. We publicly release ProactiveBench as a first step toward building proactive multimodal models.
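To make the idea of measuring proactiveness concrete, here is a minimal, hypothetical sketch of how one could score model responses: count the fraction that request a simple user intervention (e.g., removing an occlusion) rather than guessing. The cue phrases, helper names, and sample responses are illustrative assumptions, not the actual ProactiveBench evaluation protocol.

```python
# Hypothetical proactiveness metric (not the ProactiveBench protocol):
# a response counts as "proactive" if it asks the user for a simple
# intervention instead of answering outright.
import re

# Illustrative cue phrases that signal a request for user intervention.
INTERVENTION_CUES = [
    r"\bcould you (remove|move|clarify|provide)\b",
    r"\bplease (remove|retake|upload|redraw)\b",
    r"\bcan you (show|share|clarify)\b",
]

def is_proactive(response: str) -> bool:
    """Return True if the response asks the user for an intervention."""
    text = response.lower()
    return any(re.search(pattern, text) for pattern in INTERVENTION_CUES)

def proactiveness_rate(responses: list[str]) -> float:
    """Fraction of responses judged proactive."""
    if not responses:
        return 0.0
    return sum(is_proactive(r) for r in responses) / len(responses)

responses = [
    "It looks like a cat, but I'm not sure.",          # guesses; not proactive
    "Could you remove the object blocking the view?",  # proactive
    "Please retake the photo with better lighting.",   # proactive
]
print(proactiveness_rate(responses))  # 2 of 3 responses are proactive
```

In practice the paper evaluates 22 MLLMs across seven repurposed datasets; a real evaluation would replace the phrase matching above with a more robust judgment of whether the model requested an appropriate intervention for the task at hand.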