Right-to-Act: A Pre-Execution Non-Compensatory Decision Protocol for AI Systems

arXiv cs.AI · April 28, 2026

Key Points

  • The paper proposes the “Right-to-Act” protocol, a deterministic pre-execution decision layer that determines whether an AI system’s output is permitted to be carried out in the real world.
  • Unlike compensatory regimes, where high-confidence signals can override failed conditions, the protocol enforces strict structural constraints: if any required condition is unmet, execution is halted or deferred rather than overridden (one plausible formalization appears after this list).
  • The authors formalize a pre-execution legitimacy boundary and show, through a scenario-based case study, that identical AI outputs can lead to divergent outcomes depending on whether the protocol permits execution.
  • The approach reframes AI control away from optimizing decisions and toward governing whether decisions are admissible, aiming to preserve reversibility and prevent premature or irreversible actions regardless of model architecture or training methods.

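One way to state the compensatory/non-compensatory distinction from the second point (the notation here is ours, not necessarily the paper’s): given condition scores $s_1, \dots, s_n$, weights $w_i$, per-condition thresholds $\theta_i$, and a global threshold $\tau$,

$$\text{compensatory:}\quad \text{execute} \iff \sum_{i=1}^{n} w_i\, s_i \ \ge\ \tau \qquad\qquad \text{non-compensatory:}\quad \text{execute} \iff \bigwedge_{i=1}^{n} \big(s_i \ge \theta_i\big).$$

Under the compensatory rule, a sufficiently high score on one condition can offset a failure on another; under the non-compensatory rule, a single $s_i < \theta_i$ blocks execution no matter how strong the remaining signals are.
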
Abstract

Current AI systems increasingly operate in contexts where their outputs directly trigger real-world actions. Most existing approaches to AI safety, risk management, and governance focus on post-hoc validation, probabilistic risk estimation, or certification of model behavior. However, these approaches implicitly assume that once a decision is produced, it is eligible for execution. In this work, we introduce the Right-to-Act protocol, a deterministic, non-compensatory pre-execution decision layer that evaluates whether an AI-generated decision is permitted to be realized at all. Unlike compensatory systems, where high-confidence signals can override failed conditions, the proposed framework enforces strict structural constraints: if any required condition is unmet, execution is halted or deferred. We formalize the distinction between compensatory and non-compensatory decision regimes and define a pre-execution legitimacy boundary. Through a scenario-based case study, we demonstrate how identical AI outputs can lead to divergent outcomes when evaluated under a Right-to-Act protocol, preserving reversibility and preventing premature or irreversible actions. The proposed approach reframes AI control from optimizing decisions to governing their admissibility, introducing a protocol-level abstraction that operates independently of model architecture or training methodology.
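
The paper does not ship code, but the gating logic it describes is simple enough to sketch. The following Python sketch shows one plausible shape for a non-compensatory pre-execution gate; every name in it (`Verdict`, `Condition`, `right_to_act`, and the two example conditions) is hypothetical and not drawn from the paper.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Sequence


class Verdict(Enum):
    EXECUTE = "execute"  # every required condition holds
    DEFER = "defer"      # a non-blocking condition failed; re-evaluate later
    HALT = "halt"        # a blocking condition failed; do not execute


@dataclass(frozen=True)
class Condition:
    """A single required pre-execution condition (illustrative only)."""
    name: str
    check: Callable[[dict], bool]  # deterministic predicate over the action context
    blocking: bool = True          # failure -> HALT if True, DEFER if False


def right_to_act(context: dict, conditions: Sequence[Condition]) -> Verdict:
    """Non-compensatory gate: any failed condition blocks execution outright.

    There is deliberately no weighted sum and no confidence score here,
    so no strong signal can compensate for a failed condition.
    """
    failed = [c for c in conditions if not c.check(context)]
    if not failed:
        return Verdict.EXECUTE
    return Verdict.HALT if any(c.blocking for c in failed) else Verdict.DEFER


# The same AI output evaluated under two contexts diverges, echoing the
# paper's case-study framing: execution in one, deferral in the other.
conditions = [
    Condition("human_approval_present", lambda ctx: ctx.get("approved", False)),
    Condition("action_is_reversible", lambda ctx: ctx.get("reversible", False),
              blocking=False),
]

print(right_to_act({"approved": True, "reversible": True}, conditions))   # Verdict.EXECUTE
print(right_to_act({"approved": True, "reversible": False}, conditions))  # Verdict.DEFER
```

The design choice worth noticing is what is absent: there is no score-aggregation step at all, so the gate cannot be talked past by a highly confident model output, which is exactly the structural property the abstract attributes to non-compensatory regimes.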