From Reactive to Proactive: Assessing the Proactivity of Voice Agents via ProVoice-Bench

arXiv cs.AI / 4/17/2026


Key Points

  • The paper argues that LLM voice agents are moving from reactive, text-only interactions toward proactive, multimodal engagement, but current benchmarks largely fail to measure proactive behavior.
  • It introduces ProVoice-Bench, a new evaluation framework with four tasks designed specifically for proactive voice agents, covering the complexities of proactive intervention and monitoring.
  • Using a multi-stage data synthesis pipeline, the authors curated 1,182 high-quality test samples to support rigorous assessment.
  • Experiments on state-of-the-art multimodal LLMs reveal a sizable performance gap: models tend to over-trigger (intervening when no action is warranted) and struggle to reason about when proactive action is appropriate (see the sketch after this list).
  • The results are presented as evidence of current model limitations and as guidance for building more natural, context-aware proactive agents.
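To make the over-triggering notion concrete, here is a minimal sketch of how a false-trigger rate could be computed from per-sample trigger decisions. The metric name, the `Sample` fields, and the binary trigger/no-trigger framing are illustrative assumptions for this summary, not the metric actually defined in ProVoice-Bench.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One benchmark item: should the agent act, and did it act?"""
    should_trigger: bool   # gold label: proactive intervention is warranted
    did_trigger: bool      # model decision: the agent chose to intervene

def over_trigger_rate(samples: list[Sample]) -> float:
    """Fraction of 'stay silent' cases where the agent intervened anyway.

    Essentially a false-positive rate over the trigger decision; lower means
    the agent interrupts less often when it should not.
    """
    silent_cases = [s for s in samples if not s.should_trigger]
    if not silent_cases:
        return 0.0
    false_triggers = sum(s.did_trigger for s in silent_cases)
    return false_triggers / len(silent_cases)

# Illustrative usage with made-up decisions
if __name__ == "__main__":
    data = [
        Sample(should_trigger=False, did_trigger=True),   # unnecessary interruption
        Sample(should_trigger=False, did_trigger=False),  # correctly stayed silent
        Sample(should_trigger=True,  did_trigger=True),   # correct intervention
    ]
    print(f"over-trigger rate: {over_trigger_rate(data):.2f}")  # 0.50
```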

Abstract

Recent advancements in LLM agents are gradually shifting from reactive, text-based paradigms toward proactive, multimodal interaction. However, existing benchmarks primarily focus on reactive responses, overlooking the complexities of proactive intervention and monitoring. To bridge this gap, we introduce ProVoice-Bench, the first evaluation framework specifically designed for proactive voice agents, featuring four novel tasks. By leveraging a multi-stage data synthesis pipeline, we curate 1,182 high-quality samples for rigorous testing. Our evaluation of state-of-the-art Multimodal LLMs reveals a significant performance gap, particularly regarding over-triggering and reasoning capabilities. These findings highlight the limitations of current models and offer a roadmap for developing more natural, context-aware proactive agents.