We built a governance layer for AI-assisted development (with runtime validation and a live system)

Dev.to / 3/28/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Models & Research

Key Points

  • The article describes Janus, a governance layer for AI-assisted development that focuses on validating protocol conformance and producing governance evidence rather than measuring only model performance.
  • It highlights progress in a second paper that turns a theoretical governance model into measurable components including evidence-based scoring, omission detection, human authority boundaries, and deterministic reconstruction.
  • The team provides benchmark metrics (ECR, GVL, PVDR) to evaluate the governance approach and compare outcomes in a structured way.
  • A live system runs on top of the framework, and links are shared to both the validation framework and a public demo.
  • The author asks for feedback specifically from experts in observability, event sourcing, and AI tooling, indicating an emphasis on real-world integration and tooling feedback loops.

I’ve been working on a project called Janus — a governance layer for AI-assisted development systems.

The core idea is simple:

Instead of evaluating model performance, we evaluate governance through evidence and protocol conformance.

We just published the second paper, which moves from the theoretical model to measurable governance:

  • Evidence-based model (E+/E−)

  • Omission detection

  • Human authority boundaries

  • Deterministic reconstruction

  • Benchmark (ECR, GVL, PVDR)
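To make the ideas above concrete, here is a minimal sketch of what evidence-based scoring, omission detection, and deterministic reconstruction could look like. None of these names, steps, or formulas come from the Janus papers; they are hypothetical stand-ins to illustrate the concepts, not the actual implementation or the definitions of ECR/GVL/PVDR.

```python
# Hypothetical sketch: evidence-based governance over an append-only log.
# All identifiers (Event, REQUIRED_STEPS, governance_score, reconstruct)
# are illustrative assumptions, not part of the Janus framework.
from dataclasses import dataclass
from typing import Dict, List

@dataclass(frozen=True)
class Event:
    step: str       # protocol step this evidence belongs to
    polarity: str   # "E+" (conforming evidence) or "E-" (violation)

# Assumed protocol: every run must produce evidence for these steps.
REQUIRED_STEPS = ["plan", "review", "apply"]

def governance_score(log: List[Event]) -> Dict[str, object]:
    """Score a run from its evidence log rather than from model output."""
    positives = sum(1 for e in log if e.polarity == "E+")
    negatives = sum(1 for e in log if e.polarity == "E-")
    seen = {e.step for e in log}
    # Omission detection: a required step with no evidence at all is
    # counted as a violation, never as silent success.
    omitted = [s for s in REQUIRED_STEPS if s not in seen]
    total = positives + negatives + len(omitted)
    return {
        "conformance": positives / total if total else 0.0,
        "omitted_steps": omitted,
    }

def reconstruct(log: List[Event]) -> List[str]:
    """Deterministic reconstruction: replaying the same log always
    yields the same ordered trace, so audits are reproducible."""
    return [f"{e.step}:{e.polarity}" for e in log]

log = [Event("plan", "E+"), Event("apply", "E-")]
print(governance_score(log))          # flags 'review' as an omitted step
assert reconstruct(log) == reconstruct(list(log))  # replay is stable
```

The point of the sketch is the shift in what gets measured: the score depends only on the evidence log, and missing evidence degrades the score instead of being invisible, which is what makes omission detection and deterministic audit replay possible.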

Paper 1 (model):

https://doi.org/10.5281/zenodo.18974356

Paper 2 (validation):

https://doi.org/10.5281/zenodo.19239183

There’s also a live system running on top of it:

https://lluviadeideas-juegosdidacticos.github.io/trivias/

And the framework used to run it:

https://framework.janusgovernance.org/

I’d really appreciate feedback — especially from people working on observability, event sourcing, or AI tooling.
