AI to Learn 2.0: A Deliverable-Oriented Governance Framework and Maturity Rubric for Opaque AI in Learning-Intensive Domains

arXiv cs.AI / 4/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that existing AI governance frameworks struggle with “proxy failure” in learning-intensive domains, where AI-polished outputs may not reflect the intended evidence of human understanding or transfer ability.
  • It proposes “AI to Learn 2.0,” a deliverable-oriented governance framework that focuses on the final packaged deliverable rather than element-wise novelty.
  • The framework separates artifact residual from capability residual and operationalizes this via a five-part deliverable package, a seven-dimension maturity rubric, and gate thresholds on critical dimensions.
  • It allows opaque AI during early stages (exploration, drafting, hypothesis generation, workflow design) but requires released deliverables to be usable, auditable, transferable, and justifiable without relying on the original large language model or cloud API.
  • Through worked scoring across several contrastive case studies, it shows how to distinguish mere substitution-by-polish from bounded, auditable, handoff-ready AI-assisted workflows suitable for structured third-party review.
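The gate-threshold mechanism described above can be sketched in code. This is a minimal illustration only: the dimension names, the 0–5 scale, which dimensions are gated, and the threshold value are all assumptions for the example, not the paper's actual rubric. The key behavior it demonstrates is that a deliverable failing any critical (gated) dimension cannot be rescued by a high aggregate score.

```python
from dataclasses import dataclass

# Hypothetical dimension names, scale, and gate values for illustration;
# the paper's actual seven-dimension rubric may differ.
DIMENSIONS = [
    "usability", "auditability", "transferability", "justifiability",
    "reproducibility", "capability_evidence", "boundary_clarity",
]
GATED = {"auditability", "transferability", "capability_evidence"}
GATE_THRESHOLD = 3  # scores assumed to run 0-5

@dataclass
class RubricResult:
    total: int
    failed_gates: list

def score_deliverable(scores: dict) -> RubricResult:
    """Sum dimension scores, but record any gated (critical) dimension
    that falls below the gate threshold as a hard failure."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    failed = sorted(d for d in GATED if scores[d] < GATE_THRESHOLD)
    return RubricResult(total=sum(scores[d] for d in DIMENSIONS),
                        failed_gates=failed)

# A polished-substitution workflow: high surface scores everywhere,
# but it fails the capability-evidence gate, so the high total (31/35)
# does not make the deliverable pass.
substitution = {d: 5 for d in DIMENSIONS}
substitution["capability_evidence"] = 1
result = score_deliverable(substitution)
print(result.failed_gates)  # ['capability_evidence']
```

The design choice the gates encode is that critical dimensions are non-compensatory: a strong showing on the other six dimensions cannot offset a failure on, say, capability evidence, which is exactly how the framework separates polished substitution from handoff-ready work.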

Abstract

Generative AI is entering research, education, and professional work faster than current governance frameworks can specify how AI-assisted outputs should be judged in learning-intensive settings. The central problem is proxy failure: a polished artifact can be useful while no longer serving as credible evidence of the human understanding, judgment, or transfer ability that the work is supposed to cultivate or certify. This paper proposes AI to Learn 2.0, a deliverable-oriented governance framework for AI-assisted work. Rather than claiming element-wise novelty, it reorganizes adjacent ideas around the final deliverable package, distinguishes artifact residual from capability residual, and operationalizes the result through a five-part package, a seven-dimension maturity rubric, gate thresholds on critical dimensions, and a companion capability-evidence ladder. AI to Learn 2.0 allows opaque AI during exploration, drafting, hypothesis generation, and workflow design, but requires that the released deliverable be usable, auditable, transferable, and justifiable without the original large language model or cloud API. In learning-intensive contexts, it additionally requires context-appropriate human-attributable evidence of explanation or transfer. Worked scoring across contrastive cases, including coursework substitution, a symbolic-regression governance contrast, teacher-audited national-exam practice forms, and a self-hosted lecture-to-quiz pipeline with deterministic quality control, shows how the framework separates polished substitution workflows from bounded, auditable, and handoff-ready AI-assisted workflows. AI to Learn 2.0 is proposed as a governance instrument for structured third-party review where capability preservation, accountability, and validity boundaries matter.