Support Sufficiency as Consequence-Sensitive Compression in Belief Arbitration

arXiv cs.AI / April 21, 2026


Key Points

  • The paper argues that in belief arbitration, downstream control cannot rely only on compressed selected content and scalar confidence; deciding what must be retained is itself a consequence-sensitive decision problem.
  • It introduces a recurrent arbitration architecture that uses active constraint fields to shape a hypothesis geometry over candidates and then compresses that geometry into a support-aware control state.
  • A bounded objective formalizes the tradeoff: retaining too little support causes misrouting of verification/abstention/recovery, while retaining too much fragments learning and harms adaptation.
  • Minimal repeated-interaction simulations show an ordered performance pattern across controller designs, with adaptive support-resolution control outperforming all fixed-resolution strategies in cumulative utility.
  • The work reframes “support sufficiency” as a dynamic compression criterion that should adapt across inference-action cycles as the consequence landscape changes.
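The ordered controller predictions above can be pictured with a toy repeated-interaction loop in the same spirit. This is a sketch of mine, not the paper's code: every functional form, constant, and name (`run_controller`, `make_sluggish`, the stakes drift) is an illustrative assumption.

```python
import random

def run_controller(resolution_policy, horizon=2000, seed=0):
    """Toy repeated-interaction loop, loosely inspired by the paper's setup.

    Each step the consequence landscape ("stakes") drifts; the controller
    chooses how much support structure to retain (its "resolution"); per-step
    utility is routing quality minus a resource cost and a learning-
    fragmentation penalty. All forms and constants here are assumptions,
    not the paper's actual model.
    """
    rng = random.Random(seed)
    total, stakes = 0.0, 0.5
    for _ in range(horizon):
        stakes = min(1.0, max(0.0, stakes + rng.uniform(-0.1, 0.1)))
        r = resolution_policy(stakes)       # retained support resolution in [0, 1]
        routing = 1.0 - abs(r - stakes)     # routing works when resolution matches stakes
        resource_cost = 0.2 * r             # richer retention consumes resources
        fragmentation = 0.3 * r * r         # overly fine contexts fragment learning
        total += routing - resource_cost - fragmentation
    return total / horizon

def make_sluggish(alpha=0.05):
    """Adaptive but slow: tracks stakes with a heavily smoothed estimate."""
    state = {"r": 0.5}
    def policy(stakes):
        state["r"] += alpha * (stakes - state["r"])
        return state["r"]
    return policy

fixed_low  = run_controller(lambda s: 0.1)  # fixed low resolution
fixed_high = run_controller(lambda s: 0.9)  # fixed high resolution
sluggish   = run_controller(make_sluggish())
agile      = run_controller(lambda s: s)    # tracks the consequence landscape
```

Under these toy assumptions the cumulative-utility ordering the paper reports emerges: agile adaptive control beats sluggish adaptive control, and both fixed-resolution controllers trail, since a mismatched resolution misroutes while a uniformly high one pays resource and fragmentation costs.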

Abstract

When a system commits to a hypothesis, much of the evidential structure behind that commitment is lost to compression. Standard accounts assume that selected content and scalar confidence suffice for downstream control. This paper argues that they do not, and that determining what must survive compression is itself a consequence-sensitive problem. We develop a recurrent arbitration architecture in which active constraint fields jointly determine a hypothesis geometry over candidates. Rather than carrying that geometry forward in full, the system compresses it into a support-aware control state whose resolution is regulated by current consequence geometry, arbitration memory, and resource constraints. A bounded objective formalizes the tradeoff. Too little retained support collapses policy-relevant distinctions, producing controllers that select content adequately while misrouting verification, abstention, and recovery. Too much retained support fragments learning across overly fine contexts, degrading adaptation even as discrimination improves. These failure modes yield ordered controller predictions confirmed by a minimal repeated-interaction simulation. Adaptive controllers that regulate support resolution outperform all fixed-resolution controllers in cumulative utility. Agile adaptive control outperforms sluggish adaptive control. Fixed high-resolution control achieves the best commitment accuracy but still trails adaptive controllers because resource cost and learning fragmentation offset the gains from richer retention. Support sufficiency should be understood not as a static representational threshold, but as a dynamic compression criterion. Robust arbitration depends on preserving the smallest support structure adequate for policy under the current consequence landscape, and on regulating that structure as conditions change across repeated cycles of inference and action.
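To make "compresses that geometry into a support-aware control state" concrete, here is a minimal hypothetical sketch in which the number of retained ranked candidates scales with consequence severity. The function `compress_support`, its returned fields, and the stakes-to-resolution mapping are all invented for illustration, not taken from the paper.

```python
def compress_support(candidates, stakes, max_k=8):
    """Hypothetical sketch: compress a scored hypothesis set into a
    support-aware control state whose resolution scales with stakes.

    candidates: dict mapping hypothesis -> support score.
    stakes: consequence severity in [0, 1]; higher stakes retain more structure.
    Returns the committed hypothesis plus a truncated support profile that a
    downstream policy could use to route verification, abstention, or recovery.
    """
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    k = max(1, round(1 + stakes * (max_k - 1)))  # resolution regulated by stakes
    retained = ranked[:k]
    # Margin between the committed hypothesis and its nearest rival, if retained.
    margin = retained[0][1] - retained[1][1] if len(retained) > 1 else float("inf")
    return {"commit": retained[0][0], "support": retained, "margin": margin}

scores = {"A": 0.9, "B": 0.85, "C": 0.4, "D": 0.1}
high = compress_support(scores, stakes=0.8)  # high stakes: keep rivals and margins
low = compress_support(scores, stakes=0.0)   # low stakes: keep only the commitment
```

In this sketch the same hypothesis is committed either way; what varies with the consequence landscape is how much support structure survives compression for downstream routing.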