PolySLGen: Online Multimodal Speaking-Listening Reaction Generation in Polyadic Interaction

arXiv cs.CV / 4/10/2026


Key Points

  • PolySLGen is an online framework, announced on arXiv, for generating human-like multimodal reaction behaviors (speech, body motion, and speaking state) for a target participant within polyadic group interactions.
  • The approach takes past conversation history and motion from all participants to produce temporally coherent future reactions, explicitly addressing limitations of prior work that focused on single-modality or speaking-only dyadic settings.
  • To better capture group dynamics, PolySLGen introduces a pose-fusion module and a social-cue encoder that jointly aggregate motion and social signals across the group (a hedged sketch of this idea follows the list).
  • Experiments with quantitative and qualitative evaluations indicate PolySLGen improves motion quality, motion–speech alignment, speaking-state prediction, and overall human-perceived realism compared with adapted and state-of-the-art baselines.
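
The announcement does not include implementation details, but the group-aggregation idea behind the pose-fusion module and social-cue encoder can be illustrated with a minimal PyTorch-style sketch. Module names, tensor shapes, and the attention-based fusion below are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch of cross-participant aggregation: names, shapes, and the
# attention-based fusion are assumptions, not details from the paper.
import torch
import torch.nn as nn

class PoseFusion(nn.Module):
    """Fuse per-participant pose features into a single group motion context."""
    def __init__(self, pose_dim=64, hidden=128):
        super().__init__()
        self.proj = nn.Linear(pose_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, poses):             # poses: (batch, participants, pose_dim)
        h = self.proj(poses)              # (B, P, hidden)
        fused, _ = self.attn(h, h, h)     # attention across participants
        return fused.mean(dim=1)          # (B, hidden) group motion context

class SocialCueEncoder(nn.Module):
    """Encode per-participant social signals (e.g. speaking activity, gaze)."""
    def __init__(self, cue_dim=8, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cue_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )

    def forward(self, cues):              # cues: (batch, participants, cue_dim)
        return self.mlp(cues).mean(dim=1) # (B, hidden) group social context
```

The two context vectors would then condition whatever generator produces the target participant's reaction; how PolySLGen actually combines them is not specified in this summary.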

Abstract

Human-like multimodal reaction generation is essential for natural group interactions between humans and embodied AI. However, existing approaches are limited to single-modality or speaking-only responses in dyadic interactions, making them unsuitable for realistic social scenarios. Many also overlook nonverbal cues and the complex dynamics of polyadic interactions, both critical for engagement and conversational coherence. In this work, we present PolySLGen, an online framework for Polyadic multimodal Speaking and Listening reaction Generation. Given past conversation and motion from all participants, PolySLGen generates a future speaking or listening reaction for a target participant, including speech, body motion, and a speaking-state score. To model group interactions effectively, we propose a pose fusion module and a social cue encoder that jointly aggregate motion and social signals from the group. Extensive experiments, along with quantitative and qualitative evaluations, show that PolySLGen produces contextually appropriate and temporally coherent multimodal reactions, outperforming several adapted and state-of-the-art baselines in motion quality, motion-speech alignment, speaking-state prediction, and human-perceived realism.
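
The abstract describes an online input/output interface: all participants' past conversation and motion in, the target participant's next speech, body motion, and speaking-state score out. The sketch below only mirrors that I/O structure; the function, field names, and the hypothetical `model` call are illustrative assumptions, not the authors' API.

```python
# Hedged sketch of the online generation step described in the abstract.
# `model`, its call signature, and all field names are hypothetical.
from dataclasses import dataclass
import torch

@dataclass
class Reaction:
    speech: torch.Tensor      # generated future speech features for the target
    motion: torch.Tensor      # generated future body-motion sequence for the target
    speaking_state: float     # scalar score indicating speaking vs. listening

def generate_reaction(model, conv_history, motion_history, target_idx):
    """One online step: past conversation and motion from all participants in,
    the target participant's next multimodal reaction out."""
    with torch.no_grad():
        speech, motion, state = model(conv_history, motion_history, target_idx)
    return Reaction(speech=speech, motion=motion, speaking_state=state.item())
```

Because the framework is online, a step like this would be called repeatedly as new conversation and motion observations arrive, feeding each generated reaction back into the shared history.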