I wrote a contract to stop AI from guessing when writing code

Reddit r/artificial / 2026/3/24

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

Key points

  • The author describes an “AI drift” problem where coding assistants fill in unspecified gaps, collapse solution paths too early, or produce superficially helpful but incorrect answers.
  • To address this, they created a simple interaction “contract” that constrains the AI with rules such as not inferring missing inputs, explicitly marking unknowns, separating facts from assumptions, and avoiding premature narrowing of possibilities.
  • The approach is presented as intentionally rigid and incomplete, but it reportedly improves outcomes for code writing, debugging, and system-design reasoning.
  • The contract is shared publicly via a GitHub repository for others to experiment with or critique, and the author invites discussion about similar mitigation strategies.

I’ve been experimenting with something while working with AI on technical problems.

The issue I kept running into was drift:

  • answers filling in gaps I didn’t specify
  • solutions collapsing too early
  • “helpful” responses that weren’t actually correct

So I wrote a small interaction contract to constrain the AI.

Nothing fancy — just rules like:

  • don’t infer missing inputs
  • explicitly mark unknowns
  • don’t collapse the solution space
  • separate facts from assumptions
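The post doesn't include the contract's actual wording (that lives in the linked repository), but the rules above amount to a reusable preamble prepended to every request. A minimal sketch of that idea, with rule text paraphrased from the post rather than quoted from the repo:

```python
# Hypothetical sketch: the contract as a fixed preamble prepended to each task.
# Rule wording is paraphrased from the post; the author's repository may differ.

CONTRACT = """\
Rules for this session:
1. Do not infer missing inputs; ask for them instead.
2. Explicitly mark anything you do not know as UNKNOWN.
3. Do not collapse the solution space prematurely; keep alternatives open.
4. Separate stated facts from your own assumptions, and label each.
"""

def build_prompt(task: str) -> str:
    """Prepend the contract so every request is constrained by the same rules."""
    return f"{CONTRACT}\nTask: {task}"

prompt = build_prompt("Why does this query time out under load?")
print(prompt)
```

The point of keeping it as a fixed preamble is that the constraints apply uniformly, instead of being restated (or forgotten) per conversation.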

It’s incomplete and a bit rigid, but it’s been surprisingly effective for:

  • writing code
  • debugging
  • thinking through system design

It basically turns the AI into something closer to a logic tool than a conversational one.

Sharing it in case anyone else wants to experiment with it or tear it apart:
https://github.com/Brian-Linden/lgf-ai-contract

If you’ve run into similar issues with AI drift, I’d be interested to hear how you’re handling it.

submitted by /u/Upstairs-Waltz-3611