
How I Gave My AI a Real Brain: The System That Runs Half My Company

Dev.to / 3/23/2026

💬 Opinion · Developer Stack & Infrastructure

Key Points

  • RenoClear, a solo founder's renovation transparency platform, relies on a persistent memory system to let AI effectively run parts of the business.
  • The article identifies digital amnesia as the core problem: AI tools forget context between sessions, causing 15-20 minutes of reorientation per coding session and 90-160 minutes of lost daily productive time.
  • The solution is a simple yet powerful persistent memory brain: a structured markdown knowledge base with MEMORY.md as a master index and per-entry memory files that load automatically into every AI conversation.
  • The result is AI agents that know the codebase, past decisions, compliance rules, brand guidelines, and credentials, enabling proactive conflict detection and deprecation awareness while reducing the need for a large human team.


Three months ago, I had the same conversation with my AI for the fourteenth time.

"Use the v2 storage key, not the old one." "Don't mention foreign AI tools in the Chinese version — compliance." "The API proxy goes through the cloud function, not direct calls." Every single session, I was re-teaching the same lessons. My AI assistant had the memory of a goldfish with a 128K token attention span and absolutely zero long-term recall.

I'm a solo founder building RenoClear, a renovation transparency platform that helps homeowners and contractors stop ripping each other off. WeChat mini-program for China, global web app for everywhere else. Seventeen trade categories, AI-powered quote auditing, floor plan recognition, budget engines — the works. A product that would normally need five engineers, two product managers, and a content team.

I have none of those people. What I have is a system.

After eight weeks of building it, my AI agents know my codebase, my past decisions, my compliance rules, my brand guidelines, my API credential locations, and my preferred variable naming conventions. They catch conflicts I miss. They remember deprecations I forgot. They proactively flag when a new feature contradicts a decision I made six weeks ago.

This is how I built it.

The Problem: Digital Amnesia

Every AI tool on the market suffers from the same fundamental flaw: conversations are disposable. You close the tab, the context evaporates. Open a new session, and you're talking to a stranger who happens to be very smart.

For casual use, this is fine. For running a company? It's a disaster.

I was spending the first 15-20 minutes of every coding session just getting the AI back up to speed. Paste the file structure. Explain the architecture decisions. Remind it about the storage key migration. Tell it — again — that the Chinese content must never reference Claude or ChatGPT by name because of domestic compliance rules.

The math was brutal. At 6-8 sessions per day, I was burning 90-160 minutes daily on pure re-orientation. That's an entire engineer's productive morning, gone.

I needed to solve this exactly once.

The Brain: A Persistent Memory System

The solution turned out to be embarrassingly simple in concept and surprisingly powerful in practice: a structured markdown knowledge base that loads automatically into every AI conversation.

Here's the architecture. At the root of my user profile, there's a directory the AI reads on startup. Inside it, a file called MEMORY.md serves as the master index — a 200-line-max table of contents that points to everything the AI needs to know. It stays concise because bloat kills usefulness. Every entry links to a dedicated memory file with more detail.
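On disk, the layout might look something like this (a sketch: MEMORY.md, the 200-line cap, and the user-profile directory come from the article; the `memories/` subfolder and the individual file names are illustrative):

```
~/.claude/
├── MEMORY.md                  # master index, 200 lines max
└── memories/
    ├── cn_compliance.md       # type: feedback
    ├── iteration-progress.md  # type: project
    └── user_profile.md        # type: user
```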

*Figure: Memory System Structure*

Each memory file has YAML frontmatter with three critical fields:

```yaml
---
name: cn_compliance
type: feedback
description: "Content compliance rules for Chinese domestic platforms"
---
```
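A parser along these lines is enough to make that metadata machine-readable (a minimal sketch, not the article's actual code; it handles only the flat `key: value` form shown above, not full YAML):

```python
# Sketch: split a memory file into (metadata dict, body).
# Handles only flat "key: value" frontmatter, not nested YAML.
def parse_frontmatter(text):
    if not text.startswith("---"):
        return {}, text
    _, header, body = text.split("---", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta, body.lstrip()

meta, body = parse_frontmatter(
    '---\nname: cn_compliance\ntype: feedback\n'
    'description: "Content compliance rules for Chinese domestic platforms"\n'
    '---\nBody text.'
)
```

With the metadata separated out, the AI can decide from `type` and `description` whether a file is worth reading in full before spending context on the body.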

The type field is where the magic happens. I use four categories:

Four Memory Types

  • user — Who I am. My coding style, preferences, communication patterns. The AI learns I prefer batch processing over incremental hand-holding, that I think in systems, that I'll push 50 rounds in a single session and expect the AI to keep pace.
  • feedback — What to avoid and what to repeat. This is the most important type. When the AI makes a mistake and I correct it, that correction gets saved. When the AI does something brilliant and I confirm it, that confirmation gets saved too. Over time, this becomes a library of validated approaches and known pitfalls.
  • project — Ongoing work state. Current version numbers, uncommitted changes, iteration progress, architecture decisions. The AI picks up exactly where the last session left off.
  • reference — Where to find things. API credentials, repository URLs, cloud configurations, publishing workflows. Not the secrets themselves — pointers to them.

The self-maintaining loop is what makes this more than a glorified README. After every major task, the AI updates its own memory files. Finished a 50-round iteration sprint? The AI writes a summary to iteration-progress.md. Discovered a new compliance rule? It goes into the feedback memory. Changed the API provider? The reference file gets updated.
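The append step of that loop is mechanically simple. A hypothetical sketch (`log_progress` and the dated-heading format are my own naming, and the example writes to a temp directory):

```python
import datetime
import pathlib
import tempfile

def log_progress(memory_dir, filename, summary):
    """Append a dated summary entry to a memory file, creating it if needed."""
    path = pathlib.Path(memory_dir) / filename
    stamp = datetime.date.today().isoformat()
    with path.open("a", encoding="utf-8") as f:
        f.write(f"\n## {stamp}\n{summary}\n")
    return path

memory_dir = tempfile.mkdtemp()
path = log_progress(
    memory_dir, "iteration-progress.md",
    "Rounds 41-50: security review, compliance fixes, version bump.",
)
```

Because entries only ever append under dated headings, the file doubles as a chronological changelog the next session can skim.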

*Figure: Self-Maintaining Memory Loop*

I don't maintain this system. The system maintains itself.

After eight weeks, here's what accumulated: 50+ memory files covering competitor analysis, API credential locations, publishing workflows, code architecture decisions, storage key migrations, brand guidelines for two markets, copyright filing status in two countries, and the exact Telegram bot credentials for deployment notifications.

When I open a new session now, the AI doesn't just know my project. It knows my project's history.

Four Desktops, Four Agents

A single AI agent, no matter how well-informed, hits a ceiling. Context windows are finite. Domain expertise dilutes when you try to cram everything into one conversation. So I split the work across four virtual desktops, each running its own agent with full context of its domain:

Desktop 1: APP Development. This is the heavy hitter. Claude Code CLI runs here — a terminal-level AI agent that reads, writes, and edits code directly. It runs shell commands, manages git operations, executes builds. This is where the mini-program and web app get built. One session pushed through 50 consecutive rounds of iteration — from basic UI scaffolding to AI engine integration, security hardening, and a complete price database with 17 trade categories.

Desktop 2: Automation. The content pipeline lives here. Article generation, multi-platform publishing, cover image creation. This agent knows the publishing workflows cold.

Desktop 3: "Heaven." Creative work. Brand strategy, copywriting, design direction. I named it Heaven because the best creative ideas feel like they fall from the sky when you're not forcing them.

Desktop 4: Daily Operations. Administrative tasks, communications, project management. The grunt work that still needs to get done.

The interesting engineering problem was inter-agent communication. These agents can't talk to each other directly — they're separate processes on separate desktops. So I built a bridge system.

Under a shared directory (handoff/bridges/), each desktop has its own folder. When one agent needs to hand off work to another, it writes a structured file to the target's bridge directory. The receiving agent picks it up at the start of its next session. It's asynchronous message passing, implemented with nothing more than markdown files on a local filesystem.
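Because the bridge is just files in folders, the whole protocol fits in a few lines of Python (a sketch under the article's `handoff/bridges/` convention; `send_handoff`, `read_handoffs`, and the message format are illustrative names, not the actual implementation):

```python
import pathlib
import tempfile

def send_handoff(root, target_agent, title, body):
    """Drop a structured markdown message in the target agent's bridge folder."""
    bridge = pathlib.Path(root) / "handoff" / "bridges" / target_agent
    bridge.mkdir(parents=True, exist_ok=True)
    msg = bridge / f"{title}.md"
    msg.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    return msg

def read_handoffs(root, agent):
    """Pick up pending messages at session start, then clear the inbox."""
    bridge = pathlib.Path(root) / "handoff" / "bridges" / agent
    messages = [p.read_text(encoding="utf-8") for p in sorted(bridge.glob("*.md"))]
    for p in bridge.glob("*.md"):
        p.unlink()
    return messages

root = tempfile.mkdtemp()
send_handoff(root, "automation", "publish-v0.9-notes", "Draft the release article.")
inbox = read_handoffs(root, "automation")
```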

*Figure: 4-Desktop Bridge System*

No orchestration framework. No API layer. No database. Just files in folders, read and written by agents that know where to look.

It works because the memory system tells each agent where its bridge directory is and what format to expect. The conventions are documented in the shared knowledge base. Every agent follows the same protocol because every agent reads the same rules.

Claude Code CLI: The Core Engine

I should talk about the specific tool that makes the coding side work, because it's the piece most developers will care about.

Claude Code is a CLI tool that operates at the terminal level. Unlike chat-based interfaces where you describe what you want and hope the AI generates something close, Claude Code has direct filesystem access. It has a tool system — Read, Write, Edit, Bash, Grep, Glob — that lets it interact with the codebase the way a developer would.

Need to find every file that references a deprecated storage key? Grep. Need to understand the project structure? Glob. Need to edit a function without rewriting the entire file? Edit, with surgical string replacement. Need to run tests or build the project? Bash.
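Conceptually, that Grep step is nothing more exotic than a recursive search. A Python sketch of the idea (not Claude Code's actual implementation; the `calc_store` key name follows the article's storage-migration example, and the test file is fabricated for illustration):

```python
import pathlib
import tempfile

def grep_tree(root, needle, pattern="*.js"):
    """Return (path, line number, line) for every match under root."""
    hits = []
    for path in pathlib.Path(root).rglob(pattern):
        for lineno, line in enumerate(
            path.read_text(encoding="utf-8").splitlines(), 1
        ):
            if needle in line:
                hits.append((str(path), lineno, line.strip()))
    return hits

# Fabricated example: one file referencing the deprecated storage key.
root = tempfile.mkdtemp()
(pathlib.Path(root) / "store.js").write_text(
    "wx.setStorageSync('calc_store', data);\n", encoding="utf-8"
)
hits = grep_tree(root, "calc_store")
```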

This matters because the feedback loop is immediate. The AI makes a change, runs the build, sees the error, fixes it — all within the same conversation. No copy-pasting between a chat window and an IDE. No "here's the code, go try it and come back if it doesn't work."

50 Rounds in One Session

During the 50-round iteration sprint on the mini-program, here's what got built in a single continuous session:

  • Rounds 1-10: Page architecture, navigation, base UI components following Apple Design Language
  • Rounds 11-20: Calculation engine for 17 trade categories with room-grouped pricing
  • Rounds 21-30: AI integration — quote auditing with vision models, floor plan recognition, budget generation
  • Rounds 31-40: Data accuracy hardening, price database with real market rates, storage compatibility layer
  • Rounds 41-50: Security review, compliance fixes, performance optimization, version bump

Each round built on the last. The AI remembered what it had done in round 12 when it was working on round 38, because it was the same session. And when the session ended, the memory system captured everything so the next session could continue seamlessly.

The key insight: Claude Code doesn't just write code. It manages its own context. After every major change, it updates the project memory files. It writes what changed, why it changed, and what the next step should be. The AI is its own project manager.

The Content Factory

Shipping code is half the job. The other half is telling people about it. For a solo founder, content marketing is usually the thing that gets sacrificed — you're too busy building to write about building.

So I automated it.

The content pipeline is a Python system called Text_Publisher. It handles the full lifecycle: write an article, score it against eight quality dimensions (targeting 9.9 out of 10), generate a cover image using Remotion Still templates, and publish to four platforms simultaneously — WeChat Official Account, Zhihu, Dev.to, and Hashnode.

The crucial design decision: tri-lingual content is written independently, never translated. The Chinese article is written for Chinese readers with Chinese cultural context and domestic tool references. The English article is written for a global audience with real tool names and different framing. The Traditional Chinese version serves the Taiwanese market with its own voice.

This isn't vanity. It's compliance. Chinese domestic platforms have strict rules about referencing foreign AI tools. An article about "how I use Claude Code" would get flagged or suppressed on WeChat. So the Chinese version tells the same story with different tool names. The English version uses real names because there's no restriction.

The memory system makes this seamless. There's a feedback memory file specifically for Chinese content compliance — the AI reads it before writing any Chinese content and automatically applies the rules. No manual checking needed.
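That gate can be sketched as a simple pre-publish check (the banned-term list is illustrative, drawn only from the article's own examples; the real rules live in the feedback memory file):

```python
# Illustrative subset of the Chinese-content compliance rules.
BANNED_IN_CN = {"Claude", "ChatGPT"}

def check_cn_compliance(article_text):
    """Return the banned foreign-tool names an article mentions, if any."""
    return sorted(term for term in BANNED_IN_CN if term in article_text)

violations = check_cn_compliance("本文介绍我如何使用 Claude Code 构建小程序。")
```

An article that returns a non-empty list gets rewritten with domestic tool names before it ever reaches the publishing step.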

A Telegram bot sends me a notification when articles are published. I review on my phone, usually while eating lunch. Total time investment for content marketing: about 20 minutes per day of review. The system does the rest.

The Compound Effect

Here's what nobody tells you about persistent AI memory: the value compounds.

*Figure: The Compound Effect*

Week 1: The AI knows the basics. Project structure, tech stack, my name. It's helpful but generic. Like a new hire reading the onboarding docs.

Week 4: The AI remembers every architectural decision, every API migration, every bug fix pattern. It knows that the calc_store_v2 key uses a room-grouped structure while calc_store is flat and both must be written simultaneously for backward compatibility. It knows this because it wrote that code and saved the decision rationale.

Week 8: The AI becomes proactive. "This new feature would conflict with the compliance rule you set in week 3." "This storage key was deprecated in v0.8 — should I migrate the references?" "The last time you tried this approach with the budget engine, it caused a rendering issue on iOS. Want me to use the alternative pattern?"

This is the moment it stops feeling like a tool and starts feeling like a team member. A team member with perfect recall who never takes vacation and never has a bad day.

The 50+ memory files aren't static documents. They're a living knowledge graph that grows denser and more useful with every interaction. New connections form between old decisions. Patterns emerge that I hadn't noticed. The AI starts seeing my project more holistically than I do, because it actually reads all the documentation every single time — something no human consistently does.

The Real ROI

Let me be specific about what this system produces:

  • One person, one product, two markets. RenoClear ships in China (WeChat mini-program) and globally (web app) with shared business logic and market-specific UIs. Normally a two-team job.
  • 50 iterations in one session. Features that would take a small team weeks get built in hours. Not because the AI is faster than humans at coding — it's often slower for simple tasks — but because the feedback loop has zero latency and zero context-switching cost.
  • Multi-platform publishing, automated. Four platforms, three languages, cover images, quality scoring. Content marketing runs on autopilot.
  • Copyright filed in two countries simultaneously. China (software copyright) and the US (eCO registration). The system tracked both applications, managed the different requirements, and kept me updated on status.
  • Zero employees. Near-zero operational cost. My expenses are API credits and domain registration. That's it.

I'm not claiming this replaces a team in all cases. Complex coordination, relationship management, sales calls — those still need humans. But for the build-ship-market loop of a technical product? A well-configured AI system with persistent memory covers an astonishing amount of ground.

How to Build Your Own Version

You don't need my exact setup. The principles are what matter. Here's a practical starting checklist:

The Memory Layer (Start Here)

  1. Create a .claude directory (or equivalent for your AI tool) in your user profile and your project root.
  2. Write a CLAUDE.md in your project root with: project purpose, directory structure, hard constraints, tech stack. Keep it under 200 lines. This is your AI's onboarding doc.
  3. Create a MEMORY.md index file in your user profile directory. This is the master table of contents that auto-loads into every session.
  4. Start with three memory files:
    • user_profile.md — Your preferences, communication style, working patterns.
    • project_state.md — Current version, recent changes, active tasks.
    • feedback_rules.md — Corrections and confirmations from past sessions.
  5. Use YAML frontmatter with name, type, and description fields for every memory file. This helps the AI understand what each file is for before reading it.
  6. Enforce the self-maintenance rule: At the end of every significant session, tell your AI to update the relevant memory files. After a few sessions, it'll start doing this proactively.
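The frontmatter from step 5 is what makes the MEMORY.md index regenerable by script rather than hand-maintained. A hypothetical sketch (`build_index` and the entry format are my own; it reuses the flat `key: value` parsing from earlier):

```python
import pathlib
import tempfile

def build_index(memory_dir):
    """Rebuild a MEMORY.md-style index from each memory file's frontmatter."""
    lines = ["# MEMORY.md (master index)", ""]
    for path in sorted(pathlib.Path(memory_dir).glob("*.md")):
        meta = {}
        text = path.read_text(encoding="utf-8")
        if text.startswith("---"):
            _, header, _ = text.split("---", 2)
            for line in header.strip().splitlines():
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip().strip('"')
        lines.append(
            f"- [{meta.get('type', '?')}] {path.name}: {meta.get('description', '')}"
        )
    return "\n".join(lines)

d = tempfile.mkdtemp()
(pathlib.Path(d) / "feedback_rules.md").write_text(
    "---\nname: feedback_rules\ntype: feedback\n"
    'description: "Corrections from past sessions"\n---\n',
    encoding="utf-8",
)
index = build_index(d)
```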

The Multi-Agent Layer (When You Outgrow One Agent)

  1. Separate domains into workspaces. Don't try to make one agent do everything. Give each agent a focused domain with its own context.
  2. Set up bridge directories for inter-agent handoff. Simple folder structure: handoff/bridges/{agent-name}/. Agents write structured markdown files for each other.
  3. Document the bridge protocol in the shared memory. Every agent should know where to drop files and where to pick them up.

The Content Layer (When You Need to Ship Words)

  1. Build or adopt a publishing pipeline. The key insight: separate writing from publishing. The AI writes, a script publishes. Keep them decoupled.
  2. Create market-specific content rules as feedback memories. Your AI should know that Chinese content follows different rules than English content without being reminded.
  3. Automate quality scoring. Define your dimensions, set a threshold, reject anything below it. This prevents AI slop from reaching your audience.
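The scoring gate can be as simple as an averaged rubric with a hard threshold (a sketch; the dimension names, scores, and threshold here are hypothetical, since the article specifies only eight dimensions and a 9.9 target):

```python
THRESHOLD = 9.0  # hypothetical; pick your own bar

def quality_score(scores):
    """Average per-dimension scores (0-10); publish only above the threshold."""
    avg = sum(scores.values()) / len(scores)
    return round(avg, 2), avg >= THRESHOLD

score, publish = quality_score({
    "clarity": 9.5,
    "accuracy": 9.8,
    "structure": 9.2,
    "originality": 8.9,
})
```

Rejected drafts loop back to the writing step with the low-scoring dimensions attached as feedback, so the rubric doubles as a revision prompt.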

The Mindset

The most important thing isn't the tooling. It's the commitment to treating AI context as infrastructure, not disposable conversation. Every correction you make is a training signal. Every confirmation is a reinforcement. Every decision rationale is future context.

Write it down. Save it where the AI can find it. Let the compound effect do the rest.

I still write code myself sometimes. I still make decisions the AI can't make. I still have days where I throw out everything the system produced and start over.

But I never have the same conversation twice. And that, more than any single feature or automation, is what made a solo founder competitive with funded teams.

The system isn't perfect. It's just persistent. And in a world where every other AI conversation evaporates the moment you close the window, persistence is a superpower.

Building in public. Follow along: @CounterIntEng