
SocialOmni: Benchmarking Audio-Visual Social Interactivity in Omni Models

arXiv cs.AI / 3/18/2026


Key Points

  • The paper introduces SocialOmni, a new benchmark to evaluate social interactivity in omni-modal models across speaker identification, interruption timing, and natural interruption generation.
  • It comprises 2,000 perception samples and a 209-instance diagnostic set with strict temporal and contextual constraints, plus controlled audio-visual inconsistency scenarios to test robustness.
  • Evaluations of 12 leading omni-modal LLMs reveal substantial variance in social-interaction capabilities and a decoupling between perceptual accuracy and interruption quality.
  • The results indicate that understanding-centric metrics alone are insufficient to characterize conversational social competence and highlight the need to bridge perception and interaction in future OLMs.
  • The diagnostics from SocialOmni offer actionable signals to guide research and development toward tighter integration of perception and interaction in omni-modal models.

Abstract

Omni-modal large language models (OLMs) redefine human-machine interaction by natively integrating audio, vision, and text. However, existing OLM benchmarks remain anchored to static, accuracy-centric tasks, leaving a critical gap in assessing social interactivity: the fundamental capacity to navigate dynamic cues in natural dialogues. To this end, we propose SocialOmni, a comprehensive benchmark that operationalizes the evaluation of this conversational interactivity across three core dimensions: (i) speaker separation and identification (who is speaking), (ii) interruption timing control (when to interject), and (iii) natural interruption generation (how to phrase the interruption). SocialOmni features 2,000 perception samples and a quality-controlled diagnostic set of 209 interaction-generation instances with strict temporal and contextual constraints, complemented by controlled audio-visual inconsistency scenarios to test model robustness. We benchmark 12 leading OLMs and uncover significant variance in their social-interaction capabilities. Furthermore, our analysis reveals a pronounced decoupling between a model's perceptual accuracy and its ability to generate contextually appropriate interruptions, indicating that understanding-centric metrics alone are insufficient to characterize conversational social competence. More encouragingly, the diagnostics from SocialOmni yield actionable signals for bridging the perception-interaction divide in future OLMs.
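
The reported decoupling between perceptual accuracy and interruption quality is, in effect, a claim that the two scores are weakly correlated across models. As a minimal sketch of how such an analysis could be run on per-model results, the snippet below computes a Spearman rank correlation between a perception-accuracy column and an interruption-quality column. The model names, score values, and field names are illustrative assumptions for the sketch, not data or code from the paper.

```python
# Hypothetical per-model scores (illustrative values, not results from the paper):
# each entry pairs a perception accuracy (0-1) with an interruption-quality rating (0-5).
scores = {
    "model_a": {"perception_acc": 0.81, "interruption_quality": 2.1},
    "model_b": {"perception_acc": 0.74, "interruption_quality": 3.4},
    "model_c": {"perception_acc": 0.69, "interruption_quality": 1.8},
    "model_d": {"perception_acc": 0.62, "interruption_quality": 3.0},
}

def ranks(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

perc = [s["perception_acc"] for s in scores.values()]
intr = [s["interruption_quality"] for s in scores.values()]
print(f"Spearman rho (perception vs. interruption quality): {spearman(perc, intr):.2f}")
# A rho near zero would be consistent with the paper's decoupling claim;
# a rho near 1 would mean perception scores predict interruption quality.
```

With the illustrative numbers above, the ranking of models by perception accuracy tells you essentially nothing about their ranking by interruption quality (rho ≈ 0), which is the kind of pattern the abstract describes when it says understanding-centric metrics alone do not characterize conversational social competence.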