Governed Reasoning for Institutional AI
arXiv cs.AI / 4/14/2026
Key Points
- The paper argues that institutional decision-making (e.g., compliance, clinical triage, prior authorization appeals) needs a different AI architecture than general-purpose conversational agents provide, because such agents can make “silent” errors: consequential mistakes that never trigger human review.
- It proposes “Cognitive Core,” a governed decision substrate built from nine typed cognitive primitives and a four-tier governance model in which human review is a precondition for execution rather than an after-the-fact check (a minimal gating sketch follows this list).
- Cognitive Core includes an endogenous, tamper-evident audit ledger built as a SHA-256 hash chain to support trustworthy accountability (see the hash-chain sketch below), plus a demand-driven delegation design covering both declared and autonomously reasoned epistemic sequences.
- On an 11-case prior authorization appeal benchmark, Cognitive Core reached 91% accuracy, outperforming ReAct (55%) and Plan-and-Solve (45%), and produced zero silent errors versus 5–6 for the baselines.
- The authors introduce “governability” as a key evaluation metric, measuring how reliably a system knows when it should refrain from autonomous action, and claim that new institutional domains can be deployed through YAML configuration rather than new engineering (see the configuration sketch below).
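To make the pre-execution review requirement concrete, here is a minimal sketch of tier-gated execution in Python. Everything in it is an illustrative assumption: the tier names, the `govern` function, and the escalation message are invented for this sketch, and the paper's actual four tiers and primitive interfaces may look quite different.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable

class Tier(IntEnum):
    # Illustrative four-tier ladder; the paper's tier semantics
    # are not reproduced here.
    OBSERVE = 1        # read-only primitives
    ADVISE = 2         # autonomous recommendations, always logged
    ACT_REVIEWED = 3   # execution requires prior human approval
    ACT_CRITICAL = 4   # execution requires prior senior review

@dataclass
class Decision:
    executed: bool
    outcome: str

def govern(tier: Tier, action: Callable[[], str],
           approved: bool = False) -> Decision:
    """Gate execution before it happens: high-tier actions refuse to
    run autonomously; the system escalates instead of acting."""
    if tier >= Tier.ACT_REVIEWED and not approved:
        return Decision(False, f"escalated: {tier.name} needs human review")
    return Decision(True, action())

# An unapproved high-tier action is refused, not silently executed.
print(govern(Tier.ACT_REVIEWED, lambda: "approve appeal"))
# Decision(executed=False, outcome='escalated: ACT_REVIEWED needs human review')
print(govern(Tier.ACT_REVIEWED, lambda: "approve appeal", approved=True))
# Decision(executed=True, outcome='approve appeal')
```

The design point is that the gate sits before the action runs; an unapproved high-tier call returns an escalation record instead of executing, which is what turns a would-be silent error into a visible one.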
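The tamper-evident ledger can likewise be pictured as a plain SHA-256 hash chain in which each entry commits to its predecessor's hash, so any retroactive edit invalidates every later entry. This is a minimal sketch assuming JSON-serializable events; `AuditLedger` and its method names are hypothetical, not the paper's API.

```python
import hashlib
import json
import time

class AuditLedger:
    """Append-only ledger; each entry's hash covers the previous
    entry's hash, so a retroactive edit breaks the chain."""

    GENESIS = "0" * 64  # sentinel hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
        # Canonical serialization so the hash is reproducible.
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev_hash = self.GENESIS
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode("utf-8")
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

# Quick tamper check:
ledger = AuditLedger()
ledger.append({"primitive": "classify", "tier": 2, "outcome": "escalate"})
assert ledger.verify()
ledger.entries[0]["event"]["outcome"] = "approve"  # retroactive edit
assert not ledger.verify()
```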
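Finally, the configuration-over-engineering claim implies a declarative domain file that binds primitives to governance tiers. The sketch below uses PyYAML; the schema (the `domain`, `primitives`, and `tier` fields) is invented for illustration and is not the paper's published format.

```python
import yaml  # PyYAML; assumed available for this sketch

# Hypothetical domain definition: every field name is illustrative.
DOMAIN_YAML = """
domain: prior_authorization_appeals
primitives:
  - name: classify_request
    tier: 2          # autonomous, but logged
  - name: draft_appeal
    tier: 3          # blocked until a human approves
"""

def load_domain(text: str) -> dict:
    """Parse a domain config and index primitives by name, so a
    runtime could look up each primitive's governance tier."""
    cfg = yaml.safe_load(text)
    cfg["primitives"] = {p["name"]: p for p in cfg["primitives"]}
    return cfg

domain = load_domain(DOMAIN_YAML)
print(domain["primitives"]["draft_appeal"]["tier"])  # -> 3
```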