AI Navigate

UT-ACA: Uncertainty-Triggered Adaptive Context Allocation for Long-Context Inference

arXiv cs.CL / March 20, 2026


Key Points

  • Long-context inference in large language models suffers from attention dilution and non-uniform token-level contextual demands that fixed context budgets cannot accommodate.
  • UT-ACA is an inference-time framework that dynamically adjusts the context window according to token-wise uncertainty during decoding.
  • It learns an uncertainty detector by combining semantic embeddings with logit-based confidence and accounting for uncertainty accumulation across decoding steps.
  • When evidence is insufficient, UT-ACA can roll back, expand the context window, and regenerate the token with additional support.
  • Experiments show UT-ACA substantially reduces average context usage while preserving generation quality in long-context settings.
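The detector described above can be illustrated with a small sketch. The paper's detector is learned; the blend weights, the EMA-style accumulation, and the threshold below are all illustrative assumptions, not the authors' design:

```python
import numpy as np

def logit_confidence(logits):
    """Softmax probability of the argmax token (higher = more confident)."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

class UncertaintyDetector:
    """Stand-in for a UT-ACA-style detector: fuses a semantic uncertainty
    signal with logit-based confidence and accumulates uncertainty across
    decoding steps via an exponential moving average."""

    def __init__(self, weight_semantic=0.5, decay=0.8, threshold=0.6):
        self.w = weight_semantic      # blend weight (assumed, not from the paper)
        self.decay = decay            # EMA decay modeling accumulation across steps
        self.threshold = threshold    # trigger level for context expansion
        self.accumulated = 0.0

    def step(self, semantic_uncertainty, logits):
        # Instantaneous uncertainty: semantic signal blended with (1 - confidence).
        inst = self.w * semantic_uncertainty + (1 - self.w) * (1 - logit_confidence(logits))
        # Sustained uncertainty compounds over successive decoding steps.
        self.accumulated = self.decay * self.accumulated + (1 - self.decay) * inst
        return self.accumulated > self.threshold
```

A single confident step keeps the accumulator low; a run of uncertain steps drives it past the threshold, which is what would trigger the rollback-and-expand behavior described in the next point.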

Abstract

Long-context inference remains challenging for large language models due to attention dilution and out-of-distribution degradation. Context selection mitigates this limitation by attending to a subset of key-value cache entries, yet most methods allocate a fixed context budget throughout decoding despite highly non-uniform token-level contextual demands. To address this issue, we propose Uncertainty-Triggered Adaptive Context Allocation (UT-ACA), an inference-time framework that dynamically adjusts the context window based on token-wise uncertainty. UT-ACA learns an uncertainty detector that combines semantic embeddings with logit-based confidence while accounting for uncertainty accumulation across decoding steps. When insufficient evidence is indicated, UT-ACA selectively rolls back, expands the context window, and regenerates the token with additional support. Experiments show that UT-ACA substantially reduces average context usage while preserving generation quality in long-context settings.
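The roll-back / expand / regenerate loop in the abstract can be sketched as follows. This is an illustrative reading under assumed interfaces (`generate_token`, `is_uncertain`, and the budget schedule are all hypothetical), not the authors' implementation:

```python
def adaptive_decode(generate_token, is_uncertain, prompt_tokens,
                    init_budget=512, expand_factor=2, max_budget=4096,
                    max_new_tokens=32):
    """UT-ACA-style loop (sketch): decode under a small context budget and,
    when the detector flags insufficient evidence, roll back the current
    token, expand the budget, and regenerate it with additional context."""
    budget = init_budget
    out = []
    while len(out) < max_new_tokens:
        # Decode the next token while attending only to `budget` KV entries.
        tok = generate_token(prompt_tokens, out, budget)
        if is_uncertain(tok, budget) and budget < max_budget:
            # Roll back: discard `tok`, widen the context, retry this position.
            budget = min(budget * expand_factor, max_budget)
            continue
        out.append(tok)
    return out, budget
```

Because most positions are decoded under the small initial budget and only uncertain ones pay for a wider window, the average context usage stays low, matching the efficiency claim above.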