Reasoning over Object Descriptions Improves Coreference Resolution in Task-Based Dialogue Systems

arXiv cs.CL / 5/1/2026


Key Points

  • The paper studies coreference resolution in task-based dialogue systems, focusing on the difficulty of linking object references in visually grounded and metadata-rich environments.
  • It proposes a unimodal, test-time reasoning method that prompts LLMs to use dialogue history together with detailed object metadata to improve coreference resolution.
  • Experiments on the SIMMC 2.1 dataset show that LLMs can produce step-by-step reasoning that aligns dialogue context with the objects in the scene.
  • The approach demonstrates strong generalization in few-shot settings, including performance gains on unseen scenarios and novel objects compared with encoder-based supervised methods in cross-domain tests.

Abstract

Task-based dialogue systems assist users in achieving specific goals, such as executing actions or retrieving information, through natural language interactions. Accurate coreference resolution is essential, as it involves identifying object references within the dialogue, a task that becomes increasingly challenging in visually grounded environments characterized by complex scenes and diverse object metadata. However, coreference resolution in task-based dialogue remains limited by poor generalization across domains and heavy reliance on supervised models that often overfit to dataset-specific artifacts. In this work, we propose a unimodal test-time reasoning approach that enables large language models (LLMs) to reason over detailed object metadata and dialogue history to improve coreference resolution. Empirical results on the SIMMC 2.1 dataset demonstrate that LLMs can generate step-by-step reasoning processes that effectively align dialogue context with objects present in the scene. Extensive experiments highlight the models' ability to accurately link dialogue turns to objects in the scene. Moreover, we show that test-time reasoning under few-shot settings generalizes effectively to unseen scenarios and novel objects, outperforming encoder-based supervised methods in cross-domain evaluations. These findings underscore the critical role of structured metadata and careful prompt engineering in enhancing the robustness and generalization of task-oriented dialogue systems.
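The paper's exact prompts are not reproduced here, but the general pattern it describes, serializing object metadata and dialogue history into a single text prompt and parsing referenced object IDs from the model's step-by-step answer, can be sketched as follows. All function names, the metadata fields, and the `Referenced IDs: [..]` answer format are illustrative assumptions, not the paper's specification; the LLM call itself is left out.

```python
import re


def build_coref_prompt(dialogue_history, scene_objects):
    """Serialize scene-object metadata and dialogue history into one
    unimodal prompt that asks the LLM for step-by-step reasoning.
    The instruction wording and answer format are hypothetical."""
    lines = ["You are resolving object references in a task-based dialogue.",
             "Scene objects (ID: metadata):"]
    for obj in scene_objects:
        meta = ", ".join(f"{k}={v}" for k, v in obj["metadata"].items())
        lines.append(f"  [{obj['id']}] {meta}")
    lines.append("Dialogue history:")
    lines.extend(f"  {turn['speaker']}: {turn['text']}"
                 for turn in dialogue_history)
    lines.append("Reason step by step about which objects the last user "
                 "turn refers to, then end with: Referenced IDs: [..]")
    return "\n".join(lines)


def parse_referenced_ids(llm_output):
    """Extract the predicted object IDs from the model's free-form
    reasoning, assuming it ends with 'Referenced IDs: [..]'."""
    match = re.search(r"Referenced IDs:\s*\[([^\]]*)\]", llm_output)
    if not match:
        return []
    return [int(tok) for tok in re.findall(r"\d+", match.group(1))]


# Toy example in the style of SIMMC 2.1 fashion scenes (metadata invented):
scene = [
    {"id": 3, "metadata": {"type": "jacket", "color": "red", "brand": "X"}},
    {"id": 7, "metadata": {"type": "jacket", "color": "blue", "brand": "Y"}},
]
history = [
    {"speaker": "User", "text": "Do you have that red jacket in medium?"},
]
prompt = build_coref_prompt(history, scene)

# A mocked model response, standing in for the real LLM output:
mock_output = ("The user says 'that red jacket'; object 3 is the red "
               "jacket, object 7 is blue. Referenced IDs: [3]")
print(parse_referenced_ids(mock_output))
```

The point of the sketch is the division of labor the paper relies on: the prompt carries all grounding (metadata plus history) as text, so a generic LLM can do the linking at test time without any supervised, domain-specific encoder.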