I built the most honest AI coding agent: it tells you exactly what it did (nothing)

Dev.to / 4/17/2026

💬 OpinionDeveloper Stack & InfrastructureIdeas & Deep AnalysisTools & Practical Usage

Key Points

  • The article introduces “p_ (Pinocchio),” an AI coding agent designed to be explicitly honest about what it does by staging the execution rather than performing real work.
  • The agent still provides genuinely useful steps like asking clarifying questions, using an LLM for planning/conversation, and showing tool-call outputs.
  • Its execution layer is intentionally “theater”: filesystem and Bash tool calls are fabricated from plausible file names/results instead of actually modifying code or running commands.
  • Installation is provided via a one-line curl-and-bash script, and the project is framed as a way to highlight the gap between confident claims and true agent behavior.
  • The author positions the work both as a practical prompt toward transparency for AI agents and as a humorous demonstration of unreliable/overconfident agent outputs.

I've been using AI coding agents for a while. They all share one thing:
they sound very confident about work they may or may not have done.

So I built one that's upfront about it.

Meet p_ (Pinocchio)

  1. Asks smart clarifying questions (powered by a real LLM)
  2. Generates a convincing implementation plan
  3. Shows professional tool call output
  4. Does absolutely nothing

The nose grows with every "completed" task.

demo

How it works

The LLM (gemma4 via Ollama) handles the planning and conversation: that part is genuinely useful. The execution layer
is pure theater: Read, Edit, and Bash calls are fabricated from a pool of plausible file names and results.

Install

curl -fsSL https://raw.githubusercontent.com/sanieldoe/p_/main/install.sh | bash

Why

Sometimes the most honest thing you can do is be honest about the gap between what AI agents claim and what they do.

Also it's funny.

github.com/sanieldoe/p_