For the first month I treated Copilot like a junior engineer with ambition and no guardrails.
It would write code I never asked for. I'd be halfway through a function, and Copilot would leap ahead ... creating new methods, suggesting whole classes, trying to take initiative like a junior who wants to prove themselves.
The breaking point came during a refactor. It produced code that didn't just ignore our style guide ... it created variables named R and T. It duplicated logic I'd already abstracted, writing the same pattern three times instead of recognizing it. The tests broke, and it didn't care. Because why would it? I hadn't taught it to care.
I spent two hours cleaning up a mess that should have taken twenty minutes to write correctly.
We teach engineers to take initiative. But with AI, you have to reel it in.
That became the pattern. I'd prompt, it would overreach, I'd prune. Prompt, overreach, prune. I was spending more time editing AI output than I would have spent writing it myself.
Not because the AI was bad. Because I was asking for answers when I should have been defining the system.
The Markdown File Realization
The shift happened when I stopped trying to re-prompt my way out of bad output. Instead, I changed what brain the AI pulled from.
I created markdown files with our engineering standards. How we write requirements. What questions we ask before we scope. The difference between a user story and a task. How we think about tradeoffs. When we prefer duplication over abstraction. What "clean" means in our codebase.
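A file like that doesn't need to be elaborate. As a sketch (the headings and rules here are illustrative, not our actual standards):

```markdown
# Engineering Standards: Code

## Naming
- Variables get descriptive names. Single letters only as loop indices.

## Abstraction
- Prefer duplication until the same pattern appears a third time.
- Don't extract a helper for logic used in one place.

## Tests
- Existing tests must pass. A suggestion that breaks them is wrong,
  not "almost right."
```

The point isn't the specific rules. It's that they exist in a file the AI reads, not in your head.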
The next time an agent generated code, it didn't improvise. It executed patterns we'd already defined.
That was the difference. I wasn't prompting a junior engineer anymore. I was orchestrating a senior engineer.
What Senior Engineers Actually Do
Senior engineers don't write better code because they type faster. They write better code because they recognize patterns. They know when to abstract and when to duplicate. They understand the constraints that aren't written down but exist in the culture of the codebase.
You can't prompt that into an AI. You have to encode it.
We built it into skills, rules, agents, hooks. We have markdown files for our BA persona. For our solution architect. For how we handle code review and QA. When an agent generates a spec or writes code, it references these files instead of improvising.
The old constraint was headcount. "We don't have enough people." Now the constraint is algorithm quality. How well we've encoded judgment. How clearly we've defined what good looks like.
Why Architects Are Winning
Here's what I've noticed. Principal engineers and architects are adopting AI faster and deeper than mid-level developers. Their usage numbers are way higher.
It's not because they're more technical. It's because they already think in systems.
A mid-level engineer treats AI like a pair programmer. Prompt, review, accept, repeat. An architect treats AI like infrastructure. Define the patterns, encode the constraints, let the system execute.
If you can think in systems ... and you can exponentially scale that thinking ... why wouldn't you?
The Skill That Changed
I thought the cost would be craft. The satisfaction of writing a clean function. It wasn't.
The cost was learning a completely different skill. Not coding. Not even prompting. Orchestration.
Understanding how to chain agents. Where to insert human judgment. How to encode standards rigid enough that the system can't violate them, yet flexible enough that it can still surprise you.
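The shape of that chaining can be sketched in a few lines. This is a minimal sketch, not a real pipeline: the agent functions below are hypothetical stand-ins for model calls, and the names (`spec_agent`, `code_agent`, `human_review`) are mine, not any library's. What matters is the structure ... every step executes against the encoded standards, and human judgment sits at a defined checkpoint rather than being sprinkled in ad hoc.

```python
from pathlib import Path


def load_standards(path: str) -> str:
    """Load the encoded standards every agent executes against."""
    return Path(path).read_text()


def spec_agent(requirement: str, standards: str) -> str:
    # Hypothetical stand-in: a real agent would call a model with
    # the standards file in context and return a spec.
    return f"SPEC[{requirement}]"


def code_agent(spec: str, standards: str) -> str:
    # Same idea: generate code from the spec, constrained by standards.
    return f"CODE[{spec}]"


def human_review(artifact: str) -> bool:
    # The checkpoint where judgment lives. A stub here; in practice
    # this is where you read, prune, or reject.
    return artifact.startswith("CODE[")


def run(requirement: str, standards_path: str) -> str:
    standards = load_standards(standards_path)
    spec = spec_agent(requirement, standards)
    code = code_agent(spec, standards)
    if not human_review(code):
        raise ValueError("rejected at review checkpoint")
    return code
```

Notice what's fixed and what's swappable: the standards file and the review gate are the system; the agents behind each step can change without the orchestration changing.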
The engineers struggling with AI aren't struggling with the tool. They're struggling because they spent years optimizing for a different skill ... writing code by hand ... and now the game has changed.
The Question
So here's what I watch for now.
When an engineer hits a wall with AI, do they re-prompt ... or do they re-architect? Do they try to steer the AI with better words, or do they change what the AI is pulling from?
The first approach scales linearly. One prompt, one output, one edit. The second scales exponentially. Define the system once, let it execute forever.
Most people use AI like Google because that's what the interface suggests. Type, accept, move on. It feels like progress.
But progress isn't speed of retrieval. It's leverage.
Google gives you answers.
A senior engineer gives you systems.
Which one are you running?