
A Hierarchical End-of-Turn Model with Primary Speaker Segmentation for Real-Time Conversational AI

arXiv cs.LG / 3/17/2026

📰 News · Models & Research

Key Points

  • The paper introduces a real-time front-end for voice-based conversational AI that enables natural turn-taking in two-speaker scenarios by combining primary speaker segmentation with hierarchical End-of-Turn detection.
  • It robustly tracks the primary user in multi-speaker environments so downstream End-of-Turn decisions are not confounded by background conversations.
  • The system uses per-speaker features from both the primary speaker and the bot to predict the immediate conversational state and near-future states at t+10/20/30 ms with probabilistic forecasts that are aware of the partner's speech.
  • With a 1.14M-parameter model, it achieves 82% multi-class frame-level F1, 70.6% backchannel F1, 69.3% Final vs Others F1, and a median turn-detection latency of 36 ms, making it suitable for edge deployment.
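The multi-horizon prediction described above can be pictured as a small head that emits one probability distribution per horizon (current frame plus t+10/20/30 ms) from fused per-speaker features. The sketch below is an illustration only, not the paper's architecture: the four state labels, the linear head, and the 32-D feature size are assumptions for the example.

```python
import numpy as np

# Illustrative state labels and horizons; the paper's exact label set is
# not specified in this summary.
STATES = ["speech", "pause", "backchannel", "final"]
HORIZONS_MS = [0, 10, 20, 30]

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_states(frame_features, weights, biases):
    """Map one frame's fused features to a distribution per horizon.

    frame_features: (feat_dim,) features from primary speaker + bot.
    weights: (n_horizons, n_states, feat_dim), biases: (n_horizons, n_states).
    Returns (n_horizons, n_states) probabilities, rows summing to 1.
    """
    logits = weights @ frame_features + biases
    return softmax(logits)

rng = np.random.default_rng(0)
feat = rng.standard_normal(32)                  # assumed 32-D features
W = rng.standard_normal((len(HORIZONS_MS), len(STATES), 32)) * 0.1
b = np.zeros((len(HORIZONS_MS), len(STATES)))
probs = predict_states(feat, W, b)              # (4 horizons, 4 states)
```

In a real system the linear head would be replaced by the paper's causal model, but the output contract is the same: one calibrated distribution per future horizon, updated every frame.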

Abstract

We present a real-time front-end for voice-based conversational AI to enable natural turn-taking in two-speaker scenarios by combining primary speaker segmentation with hierarchical End-of-Turn (EOT) detection. To operate robustly in multi-speaker environments, the system continuously identifies and tracks the primary user, ensuring that downstream EOT decisions are not confounded by background conversations. The tracked activity segments are fed to a hierarchical, causal EOT model that predicts the immediate conversational state by independently analyzing per-speaker speech features from both the primary speaker and the bot. Simultaneously, the model anticipates near-future states (t+10/20/30 ms) through probabilistic predictions that are aware of the conversation partner's speech. Task-specific knowledge distillation compresses wav2vec 2.0 representations (768-D) into a compact MFCC-based student (32-D) for efficient deployment. The system achieves 82% multi-class frame-level F1 and 70.6% F1 on Backchannel detection, with 69.3% F1 on a binary Final vs. Others task. On an end-to-end turn-detection benchmark, our model reaches 87.7% recall vs. 58.9% for Smart Turn v3 while keeping a median detection latency of 36 ms versus 800–1300 ms. Despite using only 1.14M parameters, the proposed model matches or exceeds transformer-based baselines while substantially reducing latency and memory footprint, making it suitable for edge deployment.
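The feature-level knowledge distillation mentioned in the abstract (768-D wav2vec 2.0 teacher compressed into a 32-D MFCC student) can be sketched as below. This is a minimal illustration under assumptions: the linear up-projection and plain MSE objective are stand-ins, since the summary does not give the paper's exact distillation recipe.

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats, projection):
    """MSE between projected student features and teacher features.

    student_feats: (T, 32)  MFCC-based student representations.
    teacher_feats: (T, 768) wav2vec 2.0 teacher representations.
    projection:    (32, 768) learned up-projection (assumed linear).
    """
    projected = student_feats @ projection       # (T, 768)
    return float(np.mean((projected - teacher_feats) ** 2))

rng = np.random.default_rng(1)
T = 50                                           # frames in one batch
student = rng.standard_normal((T, 32))
proj = rng.standard_normal((32, 768)) * 0.05
teacher = student @ proj                         # perfectly matchable target
loss = distillation_loss(student, teacher, proj)
```

At inference time the teacher is discarded entirely: only the cheap MFCC student runs on-device, which is how the 1.14M-parameter footprint is achieved.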