# Anti-Vibe-Coding: 17 Skills That Replace Ad-Hoc AI Prompting

Dev.to / 2026/4/13

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

## Key Points

  • The article argues that “vibe coding” stems less from AI capability and more from a lack of a disciplined human-AI process that defines what “done” means.
  • It introduces the free, MIT-licensed Claude Code plugin “8-habit-ai-dev,” designed to enforce a 7-step development workflow aligned to Stephen Covey’s 8 Habits.
  • The proposed workflow includes checkpoints for investigating requirements, defining end-state, deciding architecture, breaking work into atomic tasks, reviewing AI output before commit, and deploying with staging/rollback readiness.
  • The plugin reportedly includes 472 automated assertions and DAG-validated skill chains with zero dependencies to reduce unstructured prompt-by-prompt coding.
  • The author recommends starting with two commands—/requirements (define done) and /review-ai (audit before commit)—to address most common vibe-coding failures.

## TL;DR

I built a Claude Code plugin with 17 skills that enforce a 7-step development workflow grounded in Stephen Covey's 8 Habits. It has 472 automated assertions, DAG-validated skill chains, and zero dependencies. It's called 8-habit-ai-dev and it's free (MIT).

## The Problem: Vibe Coding

You know the pattern. Open Claude Code. Type "build me a login page." Get something back. It looks right. Ship it.

Three days later: no input validation, no rate limiting, session tokens stored in localStorage, no tests, no rollback plan. The AI did exactly what you asked — and that was the problem. You never defined what "done" looks like.

This is Vibe Coding — writing code by vibes rather than discipline. AI makes it faster, but "faster" without direction is just faster in the wrong direction.

The symptoms:

  • Requirements live in your head, not in a document
  • "Review" means skimming the diff for 10 seconds
  • Tests get written after the PR (or never)
  • Architecture decisions happen accidentally mid-implementation
  • The same mistakes repeat across sessions because no one captures lessons

## The Hypothesis

What if the problem isn't the AI — it's the process around the AI?

What if, instead of making Claude smarter, we made the human-AI collaboration more disciplined?

That's the premise behind 8-habit-ai-dev: a Claude Code plugin that enforces a 7-step workflow before, during, and after coding. Each step maps to one of Stephen Covey's 8 Habits — not as philosophy, but as practical checkpoints.

## The 7-Step Workflow

```
/research      → Investigate before specifying (H5: Understand First)
/requirements  → Define done before starting (H2: Begin with End in Mind)
/design        → Human decides architecture (H8: Find Your Voice)
/breakdown     → Atomic tasks, no scope creep (H3: First Things First)
/build-brief   → Read code before writing (H5: Understand First)
/review-ai     → Audit before commit (H4: Think Win-Win)
/deploy-guide  → Staging first, rollback ready (H1: Be Proactive)
```

You don't need all 7 steps every time. Start with two:

  1. /requirements before building — define what "done" looks like
  2. /review-ai before committing — audit what the AI actually produced

Those two alone eliminate most Vibe Coding problems.

## Show, Don't Tell: Three Skills in Action

### /requirements — Define Done Before Starting

Instead of "build a login page," you get:

```markdown
## PRD: User Authentication
**What**: Email/password login with session management
**Why**: Users need secure access to dashboard
**Who**: End users (B2C), 10K expected monthly
**Success Criteria**:
  - [ ] Login with email/password returns JWT
  - [ ] Invalid credentials return 401 (not 500)
  - [ ] Rate limit: 5 attempts per minute per IP
  - [ ] Session expires after 24h of inactivity
**Out of Scope**: OAuth, 2FA (Phase 2)
```

Now Claude has something concrete to build against. And you have something concrete to review against.

### /review-ai — Audit What AI Actually Produced

After implementation, /review-ai audits across 4 axes:

| Axis | What It Catches |
| --- | --- |
| Security | SQL injection, XSS, hardcoded secrets, missing auth |
| Quality | Dead code, naming inconsistencies, >800-line files |
| Completeness | Missing error handling, untested paths, TODOs left behind |
| Performance | N+1 queries, unbounded loops, missing pagination |

Every finding cites `file:line` — no vague "consider improving error handling." Instead: `src/auth.ts:42` — password compared with `==` instead of a timing-safe comparison.

### /reflect — Capture Lessons, Don't Repeat Mistakes

After each task, 6 questions in 5 minutes:

  1. What went well?
  2. What surprised me?
  3. What would I do differently?
  4. What reusable pattern did I discover?
  5. What's one specific action item (with an owner and deadline)?
  6. Which skill was most/least useful?

The lesson gets saved to `~/.claude/lessons/`. Future /research and /build-brief sessions automatically search these lessons before starting work. The learning loop closes.

When lessons accumulate past 10 files, /reflect consolidate runs a 4-phase consolidation cycle — inspired by Claude Code's own auto-dream memory system — to merge duplicates and prune stale entries.

## What Makes This Different

### It's a Methodology, Not a Tool Collection

Most Claude Code plugins give you more tools — more agents, more MCP integrations, more slash commands. This plugin gives you more discipline. The 17 skills form a validated chain:

```
research → requirements → design → breakdown → build-brief → review-ai → deploy-guide → monitor-setup
```

Each skill declares what it expects from its predecessor and what it produces for its successor. A DAG validator (57 assertions) ensures no broken edges.
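A handoff check like that can be sketched in a few lines: each skill declares what artifacts it expects and produces, and the checker verifies every expectation is satisfied by an upstream skill. This Python sketch uses assumed data shapes and artifact names; the plugin's real validator is the bash script `test-skill-graph.sh`.

```python
def validate_chain(chain, skills):
    """Check that every artifact a skill expects is produced earlier in the chain.

    `skills` maps skill name -> {"expects": set, "produces": set}.
    Illustrative sketch; the plugin's real checks live in test-skill-graph.sh.
    """
    available = set()
    errors = []
    for name in chain:
        spec = skills[name]
        missing = spec["expects"] - available
        if missing:
            errors.append(f"{name}: missing upstream artifacts {sorted(missing)}")
        available |= spec["produces"]
    return errors

# Hypothetical declarations for the first three skills
skills = {
    "research":     {"expects": set(),        "produces": {"findings"}},
    "requirements": {"expects": {"findings"}, "produces": {"prd"}},
    "design":       {"expects": {"prd"},      "produces": {"design-doc"}},
}
```

Run against the full chain this returns no errors; drop /research and the checker reports that /requirements has nothing to build on, which is exactly the "broken edge" the DAG validator exists to catch.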

### It Adapts to Your Level

Run /calibrate once. Answer 5-7 questions about your development maturity. The plugin writes a profile to `~/.claude/habit-profile.md`. From then on, every skill adapts:

| Level | Behavior |
| --- | --- |
| Dependence | Full guidance — every step explained |
| Independence | Key checkpoints only — you know the basics |
| Interdependence | Delegation + review patterns — multi-agent workflows |

No per-skill configuration. The session hook reads your profile and emits one directive that shapes all 17 skills.

### 472 Automated Assertions

This is a markdown-only plugin. No TypeScript, no npm, no runtime. But it has more tests than most production applications:

| Validator | Assertions | What It Checks |
| --- | --- | --- |
| `validate-structure.sh` | 238 | Frontmatter, naming, sections, version sync, file size, tools, links |
| `test-skill-graph.sh` | 57 | DAG edges, symmetry, cycles, orphans, chain anchors |
| `validate-content.sh` | 177 | Docs freshness, fitness functions, convention consistency |

Why test markdown? Because skills are instructions to an AI. A broken handoff chain, a missing "When to Skip" section, or a stale version reference produces wrong behavior — just like a bug in code.
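One such structural check is easy to sketch: read a skill file and assert that its required pieces exist. Only the "When to Skip" section is named in the article; the frontmatter rule and the "Handoff" section in this Python sketch are assumptions, and the real validators are bash.

```python
# Assumed required sections -- only "When to Skip" is named in the article.
REQUIRED_SECTIONS = ["## When to Skip", "## Handoff"]

def lint_skill(markdown_text):
    """Return a list of problems found in one skill file's markdown."""
    problems = []
    if not markdown_text.startswith("---"):
        problems.append("missing frontmatter block")
    for section in REQUIRED_SECTIONS:
        if section not in markdown_text:
            problems.append(f"missing section: {section}")
    return problems
```

A check this small still catches the failure the article describes: a skill file that silently drops a section the AI was supposed to read.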

### The Whole Person Model

/whole-person-check scores any feature across 4 dimensions:

| Dimension | What It Measures | AI Blind Spot? |
| --- | --- | --- |
| Body (Discipline) | CI, tests, monitoring | AI does this well |
| Mind (Vision) | Architecture, ADRs, roadmap | AI does this well |
| Heart (Passion) | Error message empathy, DX, craft | AI neglects this |
| Spirit (Conscience) | Security ethics, privacy, "should we build this?" | AI neglects this |

If Heart or Spirit lag Body/Mind by 2+ points, it flags the imbalance. This catches the failure mode where AI-generated code is technically correct but lacks craft quality and ethical consideration.
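The imbalance rule is simple arithmetic. A Python sketch, assuming numeric per-dimension scores (the plugin's actual scoring rubric isn't described in the article):

```python
def whole_person_flags(scores, gap=2):
    """Flag Heart/Spirit dimensions lagging the stronger of Body/Mind by `gap`+ points.

    `scores` maps dimension name -> numeric score. Sketch only; the
    plugin's real rubric is not described in the article.
    """
    baseline = max(scores["Body"], scores["Mind"])
    return [
        dim for dim in ("Heart", "Spirit")
        if baseline - scores[dim] >= gap
    ]
```

So a feature scoring 9 on Body but 6 on Heart gets flagged: technically solid, but the craft dimension is falling behind.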

## Architecture: Production Patterns Inside

The v2.8.0 release adapted patterns from Anthropic's own Claude Code internals (via the "Claude Code from Source" architectural analysis):

  • Context compression awareness (/build-brief): Structure briefs so critical info survives Claude's 4-layer context compression pipeline
  • Sticky latch principle (/design): Classify decisions by rework cost — "Sticky" decisions (>50% rework) require a new design cycle to change
  • Fork agent pattern (/breakdown): Design parallel tasks to share prompt prefix for ~90% token savings
  • Dream-inspired consolidation (/reflect): 4-phase lesson consolidation modeled after Claude Code's auto-dream memory system
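The ~90% figure for the fork pattern follows from back-of-envelope arithmetic when a long shared prefix is processed (and cached) once instead of once per task. This Python sketch uses made-up numbers purely to illustrate the shape of the saving:

```python
def prefix_savings(prefix_tokens, suffix_tokens, n_tasks):
    """Fraction of prompt tokens saved when n parallel tasks share a cached prefix.

    Without sharing: each task re-sends prefix + its own suffix.
    With sharing: the prefix is processed once, suffixes per task.
    Illustrative arithmetic only; real caching economics differ by provider.
    """
    naive = n_tasks * (prefix_tokens + suffix_tokens)
    shared = prefix_tokens + n_tasks * suffix_tokens
    return 1 - shared / naive

# e.g. a 10k-token shared brief, 10 tasks, 100-token per-task suffixes
```

With a prefix much longer than the per-task suffixes, the saved fraction approaches (n-1)/n, which is where a ~90% number for ~10 parallel tasks comes from.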

## Install in 2 Commands

```shell
claude plugin marketplace add pitimon/8-habit-ai-dev
claude plugin install 8-habit-ai-dev@pitimon-8-habit-ai-dev
```

Then start with the minimum viable discipline:

```
/requirements    # Before building anything
/review-ai       # Before committing anything
```

That's it. Two skills. Biggest impact.

## Companion Plugins

8-habit-ai-dev focuses on workflow discipline — how to develop well. For maximum coverage, combine with:

  • claude-governance: Compliance enforcement — pre-commit secret scanning (25 patterns), Three Loops decision model, OWASP DSGAI mapping
  • superpowers (official): Process skills — brainstorming, debugging, TDD, parallel agents

All three compose cleanly — no conflicts, no overlap.

## The Numbers

| Metric | Value |
| --- | --- |
| Skills | 17 (hand-crafted, DAG-validated) |
| Agents | 2 (8-habit-reviewer, research-verifier) |
| ADRs | 8 architecture decisions documented |
| Wiki | 21 pages with bidirectional skill links |
| Tests | 472 automated assertions |
| Releases | 19 (v1.0 → v2.8.0 in 23 days) |
| Dependencies | 0 (pure markdown + bash) |
| License | MIT |

## Why Covey?

Stephen Covey's *The 7 Habits of Highly Effective People* (plus *The 8th Habit*) isn't a software methodology — it's a framework for effectiveness under uncertainty. AI-assisted development is exactly that environment: powerful tools, unclear requirements, constant context-switching, easy shortcuts that create long-term debt.

The mapping isn't forced:

  • H1 (Be Proactive): Don't react to bugs — prevent them. Staging first. Rollback ready.
  • H2 (Begin with End in Mind): Define success criteria before coding.
  • H3 (First Things First): Do what's important, not what's interesting. No gold-plating.
  • H4 (Think Win-Win): Reviews that help, not just judge. Error messages that empower.
  • H5 (Seek First to Understand): Read code before writing code. Research before specifying.
  • H6 (Synergize): Parallel agents > sequential prompts. Third alternatives.
  • H7 (Sharpen the Saw): Reflect after every task. Capture lessons. Consolidate.
  • H8 (Find Your Voice): Understand WHY, not just WHAT. The Whole Person Model.

"ทำเสร็จ ≠ ทำดี" — Done is not Done Well.

If your AI writes code fast but without discipline, you're just creating tech debt faster. This plugin is the discipline layer.

GitHub | MIT License

Built with Claude Code. Tested with 472 assertions. Shipped with discipline.