
Designing OCP: a deterministic runtime/language built around observe → match → commit

Dev.to / 3/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • Introduces OCP, a deterministic runtime/language built around an observe → match → commit flow to explicitly govern side effects.
  • Motivates the approach by aiming to improve determinism, replayability, and governance of effects and state transitions.
  • Describes the three-phase core flow (Observe, Match, Commit) to make side effects explicit and boundary-limited.
  • Emphasizes that OCP is an experimental design, not a universal replacement, and invites serious testing of its constraints and trade-offs.

Designing OCP: a deterministic runtime/language built around observe → match → commit

Most programming systems treat side effects as something that happens throughout execution: read here, write there, mutate state, call out to the world, then rely on discipline, tooling, logs, and debugging to reconstruct what actually happened.

I wanted to explore a different model.

That is why I designed OCP: a language/runtime experiment built around an observe → match → commit execution flow, where side effects are meant to be explicit, constrained, and governed rather than ambient.

This is not a claim that OCP is “the future of programming,” and it is not being presented as a polished universal replacement for existing languages. It is a design-led technical experiment around a question I think is worth testing seriously:

What changes if determinism, replayability, and effect governance are treated as first-class runtime constraints from the beginning?

The problem I wanted to explore

A lot of software complexity comes from the gap between:

  • what the program appears to mean,
  • what the runtime actually does,
  • and what the outside world observes after side effects happen.

In conventional systems, side effects are often easy to perform but harder to reason about after the fact. Once reads, writes, external calls, and mutations are spread throughout execution, the burden shifts to debugging tools, tracing, logs, discipline, and post hoc reconstruction.

That works, but it also creates friction:

  • replay becomes harder,
  • auditability becomes weaker,
  • state transitions become less legible,
  • and answering “what exactly happened?” can become surprisingly expensive.

OCP is an attempt to push against that direction.

The core idea

At a high level, OCP is organized around a simple conceptual flow:

  1. Observe

    Gather the facts or inputs that are allowed to be seen.

  2. Match

    Evaluate structure, conditions, and possible transitions in a constrained way.

  3. Commit

    Make effects happen through an explicit, governed commit step.

The point is not just aesthetic structure. The point is to make side effects feel less like arbitrary ambient operations and more like explicit runtime events with a clearer boundary.
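To make the three-phase shape concrete, here is a minimal sketch of one observe → match → commit step, written in Python. This is an illustration of the flow only: the names (`Runtime`, `step`, the effect tuples) are hypothetical and do not reflect OCP's actual syntax or API.

```python
# Hypothetical sketch of one observe → match → commit cycle.
# All names here are illustrative, not OCP's real API.

from dataclasses import dataclass, field

@dataclass
class Runtime:
    state: dict
    log: list = field(default_factory=list)  # committed effects, kept for replay/audit

    def step(self, inputs: dict) -> None:
        # 1. Observe: snapshot only the facts this step is allowed to see.
        observed = {"state": dict(self.state), "inputs": dict(inputs)}

        # 2. Match: a pure decision step — choose a transition, perform no effects.
        if observed["inputs"].get("deposit", 0) > 0:
            effect = ("credit", observed["inputs"]["deposit"])
        else:
            effect = ("noop", 0)

        # 3. Commit: the only place effects become real; every commit is logged.
        kind, amount = effect
        if kind == "credit":
            self.state["balance"] = self.state.get("balance", 0) + amount
        self.log.append({"observed": observed, "effect": effect})

rt = Runtime(state={"balance": 0})
rt.step({"deposit": 10})
rt.step({"deposit": 5})
print(rt.state["balance"])  # 15
```

The key structural property the sketch tries to show: the match phase computes a description of an effect, and only the commit phase makes it real, so every world change passes through one logged boundary.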

That design is intended to improve three things:

  • determinism
  • replayability
  • governance of effects and state transitions

In other words, I want execution to be easier to inspect, reason about, replay, and constrain.

Why this was worth building

I was not interested in creating “yet another syntax experiment.”

What interested me was the runtime model itself.

I wanted to see what happens if the system is shaped around questions like these from the start:

  • Can state transitions be made more legible?
  • Can effects be forced through narrower, more explicit gates?
  • Can replay, debugging, and audit become stronger properties of the model rather than add-on tooling?
  • Can a runtime be designed around controlled collapse from observation into committed world change?

That is the line of inquiry behind OCP.

What OCP is trying to be

OCP is trying to be a design-led programming model for explicit observation, constrained matching, and governed commitment of effects.

The emphasis is not on maximizing arbitrary freedom at every point in execution.

The emphasis is on making the runtime model more disciplined and structurally inspectable.

That means OCP is intentionally interested in questions like:

  • what the program is allowed to observe,
  • how possible transitions are selected,
  • when effects are permitted to become real,
  • and how those transitions can be replayed or audited later.
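One consequence of routing all effects through a logged commit boundary is that replay can fall out of the model rather than out of tooling. The sketch below illustrates that idea in Python; again, the function names and log format are assumptions for illustration, not OCP's real interface.

```python
# Illustrative replay sketch (hypothetical names, not OCP's real API).
# Because each effect is derived purely from its logged observation,
# re-applying the log deterministically reconstructs the final state.

def match(observed: dict) -> tuple:
    # Pure decision step: the same observation always yields the same effect.
    deposit = observed["inputs"].get("deposit", 0)
    return ("credit", deposit) if deposit > 0 else ("noop", 0)

def commit(state: dict, effect: tuple) -> dict:
    kind, amount = effect
    if kind == "credit":
        state["balance"] = state.get("balance", 0) + amount
    return state

def replay(log: list, initial_state: dict) -> dict:
    # Audit/replay: recompute every transition from recorded observations alone.
    state = dict(initial_state)
    for entry in log:
        state = commit(state, match(entry["observed"]))
    return state

log = [
    {"observed": {"inputs": {"deposit": 10}}},
    {"observed": {"inputs": {"deposit": 7}}},
]
print(replay(log, {"balance": 0}))  # {'balance': 17}
```

The point is not the toy ledger; it is that when match is pure and commit is the only effectful step, "what exactly happened?" becomes a question the log can answer mechanically.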

This is also why OCP is not best understood as “just a syntax layer.”

The design lives at the level of runtime semantics and execution structure.

What OCP is not

It is worth being explicit here.

OCP is not currently being presented as:

  • a finished production language,
  • a drop-in replacement for mainstream systems languages,
  • or proof that every problem should be forced into this model.

It is also not an attempt to win by slogans.

The useful question is not “is this revolutionary?”

The useful question is:

Does this model create real technical value, or does it mostly add conceptual ceremony?

That is the standard I think it should be judged by.

On authorship and AI-assisted implementation

I want to be direct about this.

OCP is a design-led, human-directed, AI-assisted project.

I am not presenting it as a hand-written solo compiler/runtime built line by line in isolation. My role is closer to this:

  • defining the original design intent,
  • shaping the model and constraints,
  • setting quality bars,
  • evaluating outputs,
  • rejecting weak implementations,
  • and steering the system toward coherence.

In other words, the intellectual ownership is in the design, structure, philosophy, validation criteria, and direction of the project, while AI is used as an implementation partner.

I think that distinction should be stated openly rather than hidden.

For some communities, AI-assisted implementation is disqualifying. I understand that. But the test I care about more is this:

Does the artifact hold up?

Are the model, documentation, examples, and runtime structure coherent enough to stand on their own?

Is the project legible enough to be criticized seriously?

Does it create technical value beyond its framing?

That is the pressure I want OCP to face.

Current state of the project

OCP already has a public GitHub repository, and the current work is focused on making the project concrete, legible, and testable as a real artifact rather than just an idea.

The priorities are straightforward:

  • a repository structure that outsiders can navigate,
  • documentation that explains the model clearly,
  • examples that show the execution shape,
  • and a presentation that makes the runtime constraints understandable.

I am less interested in pretending it is finished than in making sure it is concrete enough to be evaluated properly.

GitHub:

https://github.com/DucHaiten/OCP

The kind of feedback I actually want

I am not looking for generic encouragement.

The most useful criticism would be on questions like these:

  1. Is the core execution model understandable from the documentation and examples?
  2. Does observe → match → commit produce meaningful advantages, or mostly extra structure?
  3. Where does the model feel technically disciplined, and where does it feel over-constrained?
  4. What existing systems, languages, or runtime models should OCP be compared against more directly?
  5. If you were skeptical, which part would you attack first: semantics, ergonomics, implementation strategy, or use-case fit?

That is the level of pressure I want the project under.

Why I am sharing it publicly now

Because design ideas harden.

And once they harden too early, they become harder to challenge, harder to refine, and easier to defend for emotional reasons rather than technical ones.

I would rather put OCP in front of people while it can still be criticized at the level that matters:
the model, the runtime assumptions, the explicit constraints, and the claimed value.

If the model is weak, I want that exposed.

If it is promising but misframed, I want that exposed too.

If the implementation and presentation fail to communicate the real idea, that is worth learning early as well.

Closing

OCP is an attempt to explore a stricter runtime model centered on:

  • explicit observation,
  • constrained matching,
  • and governed commitment of effects.

Its value, if it has any, will not come from branding.

It will come from whether this structure leads to clearer execution, better replayability, stronger auditability, and more legible state transitions in practice.

If that sounds interesting, take a look at the repository and tell me where the model is strong, where it is weak, and where it is simply adding cost without enough return.

That is the conversation I want.

GitHub:

https://github.com/DucHaiten/OCP