CineSRD: Leveraging Visual, Acoustic, and Linguistic Cues for Open-World Visual Media Speaker Diarization

arXiv cs.CV / 3/19/2026

📰 News · Models & Research

Key Points

  • The paper introduces CineSRD, a unified multimodal framework that uses visual, acoustic, and linguistic cues from video, speech, and subtitles to diarize speakers in open-world visual media.
  • CineSRD performs visual anchor clustering to register initial speakers and then uses an audio language model to detect speaker turns, refining annotations and addressing off-screen speakers.
  • The authors release a dedicated speaker diarization benchmark for visual media that includes Chinese and English programs to evaluate long-form, multi-speaker content.
  • Experimental results show CineSRD achieves superior performance on the proposed benchmark and competitive results on conventional datasets, demonstrating robustness and generalizability in open-world settings.
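The two-stage idea described above — registering speakers from visual evidence first, then assigning speech segments to those anchors and flagging anything that matches no anchor as an off-screen speaker — can be sketched in simplified form. This is an illustrative sketch only, not the paper's implementation: the greedy cosine-similarity clustering, the thresholds, and all function names here are assumptions, and the real system uses learned face/voice embeddings and an audio language model for turn detection.

```python
# Hedged sketch of a CineSRD-style two-stage pipeline (illustrative only).
# Stage 1 ("visual anchor clustering"): greedily cluster face embeddings
# into speaker anchors. Stage 2: assign speech-segment embeddings to the
# nearest anchor; low-similarity segments are flagged as unregistered
# off-screen speakers. Thresholds and the greedy rule are assumptions.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def cluster_visual_anchors(face_embs, threshold=0.9):
    """Greedy clustering: each embedding joins the first anchor whose
    centroid is within `threshold` cosine similarity, else starts a new one."""
    anchors = []  # each anchor: {"centroid": [...], "members": [...]}
    for emb in face_embs:
        for anchor in anchors:
            if cosine(anchor["centroid"], emb) >= threshold:
                anchor["members"].append(emb)
                n = len(anchor["members"])
                # update centroid as the running mean of member embeddings
                anchor["centroid"] = [
                    sum(m[i] for m in anchor["members"]) / n
                    for i in range(len(emb))
                ]
                break
        else:
            anchors.append({"centroid": list(emb), "members": [emb]})
    return anchors

def diarize(speech_embs, anchors, threshold=0.8):
    """Assign each speech segment to the closest registered visual anchor,
    or label it off-screen when no anchor is similar enough."""
    labels = []
    for emb in speech_embs:
        sims = [cosine(a["centroid"], emb) for a in anchors]
        best = max(range(len(sims)), key=lambda i: sims[i]) if sims else None
        if best is not None and sims[best] >= threshold:
            labels.append(f"speaker_{best}")
        else:
            labels.append("off_screen")  # unregistered speaker, never on camera
    return labels

if __name__ == "__main__":
    # Toy 2-D embeddings: two visual identities, three speech segments,
    # the last of which matches no on-screen anchor.
    faces = [[1.0, 0.0], [0.98, 0.05], [0.0, 1.0]]
    anchors = cluster_visual_anchors(faces)
    speech = [[0.99, 0.01], [0.02, 1.0], [-1.0, 0.1]]
    print(len(anchors))              # → 2 registered speakers
    print(diarize(speech, anchors))  # → ['speaker_0', 'speaker_1', 'off_screen']
```

In the actual framework, the off-screen labels would then be refined by the audio language model's speaker-turn detection rather than left as a single catch-all bucket.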

Abstract

Traditional speaker diarization systems have primarily focused on constrained scenarios such as meetings and interviews, where the number of speakers is limited and acoustic conditions are relatively clean. To explore open-world speaker diarization, we extend this task to the visual media domain, encompassing complex audiovisual programs such as films and TV series. This new setting introduces several challenges, including long-form video understanding, a large number of speakers, cross-modal asynchrony between audio and visual cues, and uncontrolled in-the-wild variability. To address these challenges, we propose Cinematic Speaker Registration & Diarization (CineSRD), a unified multimodal framework that leverages visual, acoustic, and linguistic cues from video, speech, and subtitles for speaker annotation. CineSRD first performs visual anchor clustering to register initial speakers and then integrates an audio language model for speaker turn detection, refining annotations and supplementing unregistered off-screen speakers. Furthermore, we construct and release a dedicated speaker diarization benchmark for visual media that includes Chinese and English programs. Experimental results demonstrate that CineSRD achieves superior performance on the proposed benchmark and competitive results on conventional datasets, validating its robustness and generalizability in open-world visual media settings.