A few days ago I built a tool called JoyConf: a real-time audience feedback system that lets speakers see emoji reactions floating up in the corner of their presentation while they're talking. It was a fun, simple idea and I was pretty excited about it.
I built it in Elixir and Phoenix LiveView, which was a deliberate choice. I mostly write Ruby these days, but this project felt like a good excuse to dig into Elixir and LiveView. Learn something new, build something useful. Two birds, one stone.
I drove the overall design and implementation planning, did active code review, and contributed everywhere I could. But for the Elixir and LiveView specifics, I leaned heavily on Claude. The syntax, architecture decisions, and debugging were Claude's domain, because I simply didn't know enough yet to own them. The tool worked, and it seemed to work well. But when I got to the end and looked at the codebase, I realized I didn't really understand the parts Claude had built. I had reviewed the code as carefully as I could, but reviewing code written in a language you barely know, in an architecture you've never used, only gets you so far. The understanding of those pieces had mostly stayed with Claude.
That's cognitive debt. And LLMs are very good at generating it.
What cognitive debt actually is
Cognitive debt accumulates when you defer the thinking that should happen now. It's different from technical debt, which is about the code itself (shortcuts taken, tests skipped, abstractions that didn't quite work out). Cognitive debt is about the reasoning that never happened. The mental model that never got built. The decision that got made without being understood.
Like financial debt, it doesn't feel like much at first. You're moving fast, things are working, you're shipping. The bill comes later, when you need to debug something you can't reason about, extend a system you don't understand, or explain a decision you never actually made. And to be clear, cognitive debt was around long before LLMs; they just magnify the problem.
LLMs make this disturbingly easy
LLM-generated code is mostly right. That's what makes it dangerous.
If the code were obviously wrong, you'd catch it. You'd dig in, figure out what went wrong, learn something in the process. But LLM output is usually plausible, often correct, and just coherent enough that it passes the vibe check. You run the tests. They pass. You move on. The mental model of how it works never gets built, because you never needed it... until you do.
There's a specific failure mode worth naming here. Using an LLM to move faster on things you understand is leverage. Using it to skip understanding altogether is debt. Those feel identical in the short term. Both result in code getting written. One leaves you with understanding you can build on; the other leaves you with output you're stuck with.
And it catches everyone. Junior developers accept LLM output because they don't know enough to question it. Senior developers accept it because they've already reviewed a hundred PRs today and the code looks fine, so they assume it is fine. Both skip the reasoning step. The result is a codebase full of decisions nobody on the team can actually defend.
Back to JoyConf
When I realized I'd built something I didn't fully understand, I asked Claude to write me an explainer document. Not a summary, but an actual explanation of the architecture, the key concepts, why certain decisions were made, how the pieces fit together. Something I could read, learn from, and come back to later.
It wasn't a magic pill. I was starting from near zero with Elixir and LiveView, so one document didn't make me an expert. But it meaningfully closed the gap. I understood the code better than I did before. I had something to refer back to. And I started to feel like the codebase was actually mine.
That experience shaped how I think about using LLMs for coding. The tool works fine. How you engage with it makes all the difference.
Practical ways to keep the debt in check
Ask for explanations before you accept the code. Don't just run it. Ask the LLM to walk you through what it did and why. This takes an extra minute and catches a surprising number of cases where the code is technically correct but built on assumptions you don't share.
Ask for an explainer document for bigger decisions. Architecture choices, non-obvious patterns, anything you're going to need to live with for a while: ask the LLM to write it up in plain language. Keep it in the repo. Future you will thank present you.
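To make that concrete, here's one possible skeleton for such a document. The headings are just a suggestion, not a fixed format; adapt them to the project:

```markdown
# Architecture explainer: <feature or system>

## What this is
One or two paragraphs of plain-language overview.

## Key concepts
The domain and framework ideas a reader needs (for JoyConf, things
like LiveView lifecycle or PubSub), explained from scratch.

## Decisions and why
Each non-obvious choice, the alternatives considered, the tradeoff.

## How the pieces fit together
A walkthrough of the main flow, end to end.

## Open questions / things to revisit
Anything the LLM (or you) is unsure about.
```

Asking for this structure up front tends to produce a better document than "explain the codebase," because it forces the LLM to surface the decisions and tradeoffs, not just describe the code.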
Use Simon Willison's "showboat" approach to document what was built. The showboat tool "creates executable demo documents that show and prove an agent's work" (think Jupyter notebook, but plain Markdown). The LLM walks through its output with explanation and context, producing living documentation that captures not just what the code does, but why it was written that way. It isn't suitable for every use case, but where it fits, it's excellent.
Read the LLM's thinking, especially when debugging. Many LLMs can expose their reasoning process. When you're stuck on a bug or trying to understand a decision, asking the LLM to think out loud before answering is one of the fastest ways to build genuine understanding rather than just getting an answer.
Write the tests yourself. Even if you let the LLM write the implementation, writing the tests forces you to reason about the behavior you actually want. It's one of the best ways to make sure the mental model gets built. Of course, it takes more time and it's not always possible, as with JoyConf, where I didn't know enough about the Elixir environment to write effective tests. But when you can, it's a great way to stay in the driver's seat.
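As a sketch of what that division of labor looks like, here's a minimal Ruby example (my home language, and the class here is hypothetical, invented just for illustration). The test is the part you write by hand; the implementation is the part you might hand to the LLM:

```ruby
require "minitest/autorun"

# Hypothetical: a counter for audience emoji reactions.
# You'd hand the implementation of this class to the LLM...
class ReactionCounter
  def initialize
    @counts = Hash.new(0)
  end

  def record(emoji)
    @counts[emoji] += 1
  end

  # Returns the n most frequent emoji, most popular first.
  def top(n)
    @counts.sort_by { |_, count| -count }.first(n).map(&:first)
  end
end

# ...but writing this test yourself forces you to decide, up front,
# what "top reactions" actually means (ordering, tie behavior, limit).
class ReactionCounterTest < Minitest::Test
  def test_orders_reactions_by_frequency
    counter = ReactionCounter.new
    3.times { counter.record("🎉") }
    counter.record("👏")
    assert_equal ["🎉", "👏"], counter.top(2)
  end
end
```

The point isn't this particular code; it's that specifying the behavior in a test is where the mental model gets built, regardless of who writes the implementation.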
Slow down at decision points. LLMs are fast. That's the point. But speed can accelerate debt. When you hit a fork in the road (an architectural choice, a tradeoff, a "there are a few ways to do this" moment) pause and do the reasoning yourself, even if you use the LLM to help you think it through.
The goal isn't to use LLMs less
LLMs are genuinely useful and I don't plan to stop using them. The goal is to stay in the driver's seat mentally, using them for leverage rather than as a substitute for thinking.
A healthy LLM workflow and a debt-generating one can look identical from the outside. The difference shows up later, when you need to understand, maintain, or extend what you built. If you finish each session understanding what you built and why, you're using the tool well. If you don't, you're taking out a loan.
And like financial debt, cognitive debt is a lot easier to avoid than to pay off.