Proactive Dialogue Model with Intent Prediction

arXiv cs.CL / 5/1/2026


Key Points

  • The paper argues that conventional dialogue models are reactive and may produce redundant responses when multiple intents are present.
  • It proposes a lightweight intent-transition prior built from dialogue data and injected into the system prompt at inference time to make responses more proactive (see the prompt-construction sketch after this list).
  • The approach uses a Temporal Bayesian Network (T-BN) trained on per-turn intent annotations from MultiWOZ 2.2 to predict likely intent transitions (a simplified transition-prior sketch follows this list).
  • Experimental results report strong next-intent prediction scores (Recall@5 = 0.787, MRR = 0.576) and improved dialogue efficiency in ground-truth replay experiments: Coverage AUC rises from 0.742 to 0.856, and the average number of turns to reach 75% intent coverage drops from 3.95 to 2.73.
  • The method improves dialogue behavior without changing the underlying language model, suggesting a plug-in guidance strategy for existing systems.
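
The summary does not include code, but the core idea reduces, in its simplest form, to a transition prior over per-turn intents. The sketch below is a minimal first-order approximation: it estimates P(next intent | current intent) from annotated intent sequences and ranks candidates, as in the Recall@5 evaluation. The paper's T-BN is temporal and may condition on richer history; the function names and intent labels here are illustrative, not the authors' implementation.

```python
from collections import Counter, defaultdict

def fit_transition_prior(dialogues):
    """Estimate P(next intent | current intent) from intent sequences.

    `dialogues` is a list of per-dialogue USER-turn intent sequences,
    e.g. [["find_restaurant", "book_restaurant", "find_taxi"], ...]
    (illustrative labels; MultiWOZ 2.2 defines its own intent schema).
    """
    counts = defaultdict(Counter)
    for seq in dialogues:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    # Normalize transition counts into conditional probabilities.
    return {
        cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for cur, nxts in counts.items()
    }

def predict_next_intents(prior, current_intent, k=5):
    """Rank candidate next intents by probability and return the top k."""
    ranked = sorted(prior.get(current_intent, {}).items(),
                    key=lambda kv: kv[1], reverse=True)
    return [intent for intent, _ in ranked[:k]]
```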
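At inference time, the predicted transitions are injected into the system prompt so an unmodified language model can act on them. The exact prompt wording is not given in this summary, so the template below is an assumption about how such a hint might be phrased.

```python
def build_system_prompt(base_prompt, predicted_intents):
    """Append an intent-transition hint to an existing system prompt.

    The hint wording is an assumption; the paper only states that the
    prior is injected into the system prompt at inference time.
    """
    hint = ("Likely upcoming user intents: "
            + ", ".join(predicted_intents)
            + ". Where natural, proactively address these needs "
              "in your response.")
    return base_prompt + "\n\n" + hint

# Example: guide generation with the transition prior (names illustrative).
# prior = fit_transition_prior(train_dialogues)
# prompt = build_system_prompt(BASE_PROMPT,
#                              predict_next_intents(prior, "find_restaurant"))
```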

Abstract

Dialogue models are inherently reactive, responding to the current user turn without anticipating upcoming intents, which leads to redundant interactions in multi-intent settings. We address this limitation by introducing a lightweight intent-transition prior derived from dialogue data and injected into the system prompt at inference time. We instantiate this prior using a Temporal Bayesian Network (T-BN) trained on per-turn intent annotations in MultiWOZ 2.2. The T-BN achieves Recall@5 = 0.787 and MRR = 0.576 on 1,071 held-out USER-turn pairs. In a ground-truth replay over 200 dialogues, BN-guided generation improves Coverage AUC from 0.742 to 0.856 and reduces the number of turns required to reach 75% intent coverage from 3.95 to 2.73. These results show that lightweight intent-transition guidance enables more proactive and efficient dialogue behavior without modifying the underlying language model.
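
The reported numbers correspond to standard ranking and replay metrics. Below is a sketch of how Recall@5, MRR, and a per-turn Coverage AUC could be computed; the coverage function assumes the curve is averaged over turns, which is one plausible reading since the paper's exact normalization is not given here.

```python
def recall_at_k(ranked_lists, gold, k=5):
    """Fraction of turn pairs whose gold next intent is in the top k."""
    hits = sum(g in preds[:k] for preds, g in zip(ranked_lists, gold))
    return hits / len(gold)

def mean_reciprocal_rank(ranked_lists, gold):
    """Mean of 1/rank of the gold intent (0 when it is missing)."""
    total = 0.0
    for preds, g in zip(ranked_lists, gold):
        if g in preds:
            total += 1.0 / (preds.index(g) + 1)
    return total / len(gold)

def coverage_auc(coverage_per_turn):
    """Mean cumulative intent coverage across turns.

    `coverage_per_turn[t]` is the fraction of a dialogue's intents
    covered after turn t; averaging approximates the area under the
    coverage curve (an assumed reading of the paper's Coverage AUC).
    """
    return sum(coverage_per_turn) / len(coverage_per_turn)
```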