A supervisor or "manager" AI agent is the wrong way to control AI

Reddit r/artificial / 3/22/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The author argues that adding supervisor or manager AI layers to oversee other AI agents is an ineffective approach and likens it to wrapping oneself in a wet blanket that merely delays the inevitable.
  • The piece critiques using multiple AI judges to evaluate outputs, suggesting that this increases complexity and false positives without resolving core reliability issues.
  • It contends that relying on pure AI oversight amounts to "blind worship" of AI and advocates for determinism through traditional software.
  • The author promotes hybrid solutions that combine AI with software as the only forward path to reliable, controllable systems.
  • Overall, the article argues for integrating AI with deterministic software rather than stacking AI governance layers to achieve safer, more predictable results.

I keep seeing more and more companies say that they're going to reduce hallucination, drift, and mistakes made by AI by adding supervisor or manager AI on top of them to review everything that those AI agents are doing.

that seems to be the way.

another thing I'm seeing is adding multiple AI judges to evaluate the output, and those companies are running around touting their low percentage of false positives or mistakes.

adding additional AI agents on top of AI agents to reduce mistakes is like wrapping yourself in a wet blanket and then adding more wet blankets to keep you warm when you're freezing.

you will freeze, it will just take longer, and it's going to use a lot of blankets.

I don't understand the blind worship of pure AI solutions. we have software that can achieve determinism. we know this.

hybrid solutions between AI and software are the only way forward
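To make the hybrid idea concrete, here's a minimal sketch of what it could look like: the AI proposes an action, and plain deterministic code gates it with hard rules instead of another AI judge. All names here (`validate_refund`, `MAX_REFUND`, the refund scenario itself) are hypothetical examples, not anything from the original post.

```python
# Hybrid pattern sketch: AI proposes, deterministic software disposes.
# The hard business rule lives in ordinary code, so the same input
# passes or fails the same way every time -- no probabilistic judge.

MAX_REFUND = 100.00  # hypothetical hard limit, enforced deterministically

def validate_refund(proposal: dict) -> float:
    """Deterministically check an AI-proposed refund; raise on any violation."""
    amount = proposal.get("amount")
    if not isinstance(amount, (int, float)) or isinstance(amount, bool):
        raise ValueError("refund amount must be a number")
    if amount < 0 or amount > MAX_REFUND:
        raise ValueError(f"refund {amount} outside allowed range [0, {MAX_REFUND}]")
    return float(amount)

# However the AI produced its proposal, the gate's verdict is reproducible:
print(validate_refund({"amount": 25.0}))  # -> 25.0
```

The point of the sketch is that the check is auditable and repeatable: you can unit-test it, and a violation is caught with certainty rather than with a "low percentage of false positives."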

submitted by /u/ColdPlankton9273