# How I Use AI to Design Software First
Many developers want AI to jump straight into the code.
I don’t.
Before I ask any tool to implement something, I work the problem out in words first. I use AI as a design partner before I use it as a coding assistant. That shift has had a bigger impact on my results than any specific model or tool.
## My toolset
My setup varies between home and work, but the pattern stays the same:
| Home | Work |
|---|---|
| ChatGPT Plus | Gemini Pro |
| GitHub Copilot | GitHub Copilot |
| Codex | N/A |
| Claude Code | Claude Code |
These tools are not interchangeable. I use them differently depending on the stage of work: discussion first, implementation second.
## The real leverage comes before code
Most AI-assisted development starts too late.
People bring AI in once they are already in the code and expect it to figure everything out. But the hard part is usually earlier, when the idea is still unclear and the boundaries are not defined.
Before I ask for code, I want clarity on:
- the problem I am solving
- system boundaries
- consistent terminology
- what is in scope now vs. what comes later
- assumptions that will cause drift if left implicit
If I solve that first, everything downstream improves.
## My workflow
### 1. Start in chat
I begin in ChatGPT, usually using voice.
That lets me move quickly through ideas, constraints, edge cases, naming, and structure. At this stage I am not asking for code. I am shaping the system.
### 2. Turn it into documents
Once the idea stabilizes, I convert it into Markdown:
- vision
- vocabulary
- goals and non-goals
- system overview
- interfaces
- roadmap
The goal is not code generation. The goal is clarity.
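As an illustration, a skeleton for those documents might look like this. The feature and its details here are hypothetical, just to show the shape; the section names mirror the list above.

```markdown
# Payment Retry Service: Design

## Vision
One paragraph on the problem and why it matters.

## Vocabulary
- **attempt**: a single charge request
- **retry window**: the period during which attempts may repeat

## Goals and non-goals
- Goal: retry failed charges with backoff
- Non-goal: fraud detection (deferred)

## System overview
Where this sits relative to the billing service and the queue.

## Interfaces
`POST /retries`: enqueue a retry, idempotent per attempt id.

## Roadmap
1. Backoff policy
2. Dead-letter handling
```

A skeleton like this becomes the artifact I paste into later prompts, so the model and I share the same vocabulary from the start.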
### 3. Design before implementation
I do not rush into the repo.
I want the architecture, terminology, and constraints clear enough that implementation becomes execution, not invention. Better design leads to better prompts, which leads to better code.
## Existing code still matters
This is not just for greenfield work.
If I have an existing codebase, I will often zip up the relevant parts and bring them into ChatGPT early. That gives me a baseline for discussion:
- what stays the same
- what changes
- where the boundaries are weak
It is not perfect for code navigation, but it is extremely effective for shaping changes before implementation.
## Why this works
The main benefit is reduced drift.
Because the design is already defined, I do not have to re-explain everything in every prompt. I can give focused instructions:
> Implement feature X using these constraints. Do not expand scope. Preserve terminology.
That is far more reliable than asking a model to infer everything from scratch.
## How I think about the tools
I group tools into two categories:
- Discussion tools (ChatGPT, Gemini): explore ideas, refine design, produce artifacts
- Implementation tools (Copilot, Codex, Claude Code): execute against a defined design
Most frustration comes from using the wrong tool at the wrong stage.
## What changes when I move to code
By the time I start implementation, the system is already defined.
That usually means:
- less drift
- fewer corrections
- better consistency
The model is no longer guessing what I want. It is executing against a plan.
## The takeaway
My workflow is simple:
- Talk through the idea
- Turn it into design documents
- Bring in existing code when needed
- Use those artifacts as the source of truth
- Then implement
AI is not just a code generator.
Used well, it is a design amplifier.
And the clearer the design is up front, the better the code tends to be.