When Less is Enough: Efficient Inference via Collaborative Reasoning

arXiv cs.LG / 5/5/2026


Key Points

  • The paper introduces DUET (Dual-model Efficient Two-stage inference), a framework that combines a capable model with a lightweight model to improve inference efficiency.
  • DUET splits inference into two stages: the capable model generates a reasoning signal, and the lightweight model uses that signal to produce the final answer.
  • A key contribution is a length-penalized joint training objective that encourages the capable model to transmit only information sufficient for the lightweight model, reducing unnecessary token generation.
  • Experiments indicate DUET preserves strong reasoning performance while cutting inference cost, saving up to 60% of the large model’s output tokens on benchmarks such as AIME and GPQA.
  • Overall, the approach targets lower-cost reasoning by delegating non-reasoning components to a smaller model without sacrificing task accuracy.
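The two-stage flow above can be sketched in a few lines. This is a toy illustration, not the authors' code: both models are stubbed as plain functions, and the function names (`capable_model_reason`, `lightweight_model_answer`, `duet_infer`) are hypothetical stand-ins for what would be LLM calls in practice.

```python
# Toy sketch of DUET-style two-stage inference (illustrative only).
# Stage 1: a "capable" model emits a compact reasoning signal.
# Stage 2: a "lightweight" model turns that signal into the final answer.

def capable_model_reason(question: str) -> str:
    """Stub for the large model: compresses its reasoning into a short signal."""
    if "2 + 2" in question:
        return "signal: sum of 2 and 2"
    return "signal: unknown"

def lightweight_model_answer(question: str, signal: str) -> str:
    """Stub for the small model: decodes the signal into the final answer."""
    if "sum of 2 and 2" in signal:
        return "4"
    return "unknown"

def duet_infer(question: str) -> str:
    signal = capable_model_reason(question)            # reasoning-intensive stage
    return lightweight_model_answer(question, signal)  # cheap final stage

print(duet_infer("What is 2 + 2?"))
```

Because the expensive model only produces the signal, its output-token budget stays small; the cheap model does the rest of the decoding.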

Abstract

In this work, we introduce DUET (Dual-model Efficient Two-stage inference), a collaborative inference framework in which a capable model and a lightweight model work together to solve a task. Relying on a single large model for end-to-end reasoning and prediction often incurs substantial inference cost. DUET instead decomposes inference into two stages: the capable model produces a reasoning signal, and the lightweight model interprets this signal to generate the final answer. This division lets reasoning-intensive computation be handled by the capable model while non-reasoning-intensive components are delegated to the lightweight model, without sacrificing task performance. To this end, we propose a length-penalized joint training objective that encourages the capable model to transmit only the information sufficient for the lightweight model to solve the task. As a result, DUET maintains strong reasoning performance at substantially lower inference cost than end-to-end inference with a large model alone, saving up to 60% of the large model's output tokens on challenging reasoning benchmarks, including AIME and GPQA.
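The length-penalized joint objective described in the abstract can be sketched as a loss of the assumed form L = L_answer + λ·|signal|, where |signal| is the token length of the reasoning signal. This is an illustrative reconstruction, not the paper's exact formulation; the function `joint_loss` and the weight `lam` are hypothetical names.

```python
# Hedged sketch of a length-penalized joint objective (assumed form,
# not taken from the paper). The loss combines the lightweight model's
# answer loss with a penalty on the reasoning signal's token length,
# pushing the capable model to emit only what the small model needs.

def joint_loss(answer_nll: float, signal_tokens: int, lam: float = 0.01) -> float:
    """L = L_answer + lam * |signal|."""
    return answer_nll + lam * signal_tokens

# A shorter signal that preserves answer quality scores strictly better:
verbose = joint_loss(answer_nll=0.5, signal_tokens=200)  # 0.5 + 2.0
concise = joint_loss(answer_nll=0.5, signal_tokens=40)   # 0.5 + 0.4
print(verbose, concise)
```

The penalty weight trades off signal brevity against answer quality: training would tune it so that pruning the signal further starts to hurt the lightweight model's accuracy.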