Belief-Guided Inference Control for Large Language Model Services via Verifiable Observations

arXiv cs.AI / 5/1/2026


Key Points

  • The paper addresses reliability in black-box LLM services where true inference quality is only partially observable at decision time, creating a sequential, budget-constrained choice per request.
  • It proposes Veroic (Verifiable Observations for Risk-aware Inference Control), framing request-time routing as a partially observable Markov decision process that accounts for partial observability and compute-budget coupling.
  • Veroic builds a lightweight, verifiable observation channel from input-output pairs by aggregating heterogeneous quality signals into a belief state over latent response reliability.
  • Using this belief state, a budget-aware policy decides whether to return a default low-cost response or trigger a higher-cost inference path to improve quality.
  • Experiments across multiple tasks show better quality–cost trade-offs, improved risk estimation/calibration, and more robust long-horizon inference control versus baseline methods.
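The decision loop described above can be sketched minimally: heterogeneous quality signals are aggregated into a scalar belief about response reliability, and a budget-aware rule chooses between returning the default output and escalating. This is an illustrative sketch, not the paper's implementation; the logistic aggregation, the fixed threshold, and the function names are assumptions.

```python
import math

def belief_from_signals(signals, weights, bias=0.0):
    """Aggregate heterogeneous quality signals (each in [0, 1]) into a
    belief that the default response is reliable.

    Hypothetical stand-in for Veroic's verifiable observation channel:
    a simple logistic combination; the paper's actual aggregation may differ.
    """
    z = bias + sum(w * s for w, s in zip(weights, signals))
    return 1.0 / (1.0 + math.exp(-z))  # pseudo-probability of reliability

def decide(belief, budget_left, threshold=0.7, escalation_cost=1.0):
    """Budget-aware control: escalate to the higher-cost inference path
    only when the belief is below threshold AND budget remains."""
    if belief < threshold and budget_left >= escalation_cost:
        return "escalate", budget_left - escalation_cost
    return "return_default", budget_left
```

For example, a confident belief (0.9) returns the default response at zero extra cost, while a low belief (0.3) triggers escalation only while the budget lasts, which is the source of the sequential budget coupling the paper emphasizes.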

Abstract

In black-box large language model (LLM) services, response reliability is often only partially observable at decision time, while stronger inference pathways incur substantial computational cost, inducing a budgeted sequential decision problem: for each request, the system should decide whether the default low-cost response is sufficiently reliable or whether additional computation should be allocated to improve response quality. In this paper, we propose Verifiable Observations for Risk-aware Inference Control (Veroic), a framework for adaptive inference control in black-box LLM settings, which formulates request-time control as a partially observable Markov decision process to capture partial observability and sequential budget coupling. It constructs a lightweight verifiable observation channel from the input-output pair by aggregating heterogeneous quality signals into a belief state over latent response reliability, which is then used by a budget-aware policy to decide whether to return the default output or trigger a higher-cost inference pathway. Experiments on diverse tasks show that Veroic achieves improved quality-cost trade-offs, stronger risk estimation and calibration, and more robust long-horizon inference control than competitive baselines.
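Since the abstract frames request-time control as a partially observable Markov decision process, the belief state over latent reliability would be maintained by Bayesian updating as observations arrive. The following is a generic POMDP-style belief update over a binary latent state (reliable vs. unreliable), included only to make the formulation concrete; the likelihood values and the binary state space are assumptions, not details from the paper.

```python
def update_belief(prior, p_obs_given_reliable, p_obs_given_unreliable):
    """One Bayesian belief update over a binary latent reliability state.

    prior                  : current P(response is reliable)
    p_obs_given_reliable   : likelihood of the observed quality signal
                             if the response is actually reliable
    p_obs_given_unreliable : likelihood of the same signal otherwise
    Returns the posterior P(reliable | observation).
    """
    numerator = prior * p_obs_given_reliable
    denominator = numerator + (1.0 - prior) * p_obs_given_unreliable
    return numerator / denominator
```

For instance, starting from an uninformative prior of 0.5, a verifiable signal that is four times as likely under reliability (0.8 vs. 0.2) raises the belief to 0.8; the budget-aware policy then acts on this posterior rather than on the raw signals.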