AI時代のコーディング教育を再考する

Dev.to / 2026/4/21


Key Points

  • This article argues that AI-assisted coding education today is close to a binary choice - use it or don't - which makes it easy to slide from genuine learning into "pseudo-learning."
  • The author recounts how over-reliance on coding agents brought short-term gains but hurt long-term retention, likening it to flipping through slides the night before an exam and merely feeling like you understand.
  • As a remedy, the author proposes a "tenuous middle ground" of returning to documentation, code snippets from framework or service providers, and light tutorials, while noting that tweaking system prompts alone is not a permanent fix.
  • The central recommendation is to treat coding professionals and learners differently, since learning and task execution engage fundamentally different mental machinery even when the underlying AI networks are the same.
  • The piece is framed as a reflection on AI's role in coding education, inspired by months of working with tools such as Windsurf, Replit, and Vercel v0.

Today marks the 8-month anniversary of my tryst with Windsurf, Replit, and Vercel v0. I thought it fitting to commemorate this ill-fated romance with some thoughts on AI's place in the coding education landscape, as well as a learning aid I've been ideating on for some time.

· · ·

Some Background

When I first started coding with freeCodeCamp and The Odin Project over a year ago, I was blissfully unaware of coding agents and how sorely deficient my skills were in comparison to theirs. And I say 'blissfully unaware' not as a turn of phrase, but as a literal nod to how much simpler life was when it was just me, an editor, and some lesson notes.

Because there's nothing more demoralizing than the realization that with one prompt, a fancy algorithm can spit out entire features that took you weeks to build.

They say that comparison is the thief of joy, and in this case I didn't scoff.

· · ·

The Problem

The issue with AI in the coding education landscape today is that it's a step function - you either use it or you don't. There's virtually no middle-ground except, say, system prompts or context injection in an attempt to 'nerf' the model.

If you choose to use it, then you tread a very fine line between learning and pseudo-learning. In my experience, the period in which I leaned heavily on coding agents coincided with the period in which I learned the most but retained the least.

It's like not going to classes all semester, then flipping through the slides the night before the exam. You think you got it, but you really don't.

If you choose not to use it, then you come off as being 'resistant' or 'old-school', and you do miss out on some objective benefits - like having a personal Stack Overflow on steroids and without the snobbish attitude.

· · ·

A Tenuous Middle-Ground

The solution?

I think everyone's handling it differently, but personally, I've gone back to reading documentation, code snippets from framework or service providers, and light tutorials.

This revised approach has clearly been more fruitful. But trying to maintain it with existing AI tools has been genuinely frustrating. Tweaking system prompts or injecting instructions inline to tone down the solutions just doesn't seem like a permanent fix.

But at the very least, it provides a cardinal direction.

In my mind at least, the solution is to bifurcate our treatment of coding professionals and those who are learning to code. After all, knowledge acquisition lights up a fundamentally different part of our brain than task execution, even if the underlying networks are the same.

· · ·

The Systemic Issue

What I realized is that most startups and major AI companies are chasing professional developers. Their target customer is the enterprise team, not people who are learning to code. This is understandable given the revenue dynamics and the longer-term AI game plan.

But I'm betting that learning to code will still be important, however much the role of coders changes in the age of AI.

To that end, perhaps there's a gap that's being left unfilled.

Namely, "AI Constraint".

· · ·

The AI Constraint Layer

My understanding is that the "AI Constraint" layer has 3 main levels:

  • Level 1: the model itself, i.e. training data, weights, etc.
  • Level 2: the harness, i.e. the intermediary steps a user prompt flows through before being returned as a fully-formed response
  • Level 3: system prompts and context injection

Currently, individuals are only able to modify the response at Level 3, via system prompts or one-off context injection.

But it's somewhat annoying to have to save an instruction somewhere and paste it into every LLM you interact with. Doubly so when you regularly interact with dozens of models from tens of providers across many platforms. This is as true for assistants in code editors or the CLI as it is for general-purpose LLMs in browser chat windows.
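To make the Level 3 workaround concrete, here is a minimal sketch of the "saved instruction, pasted everywhere" pattern. The constraint text and the function name withLearnerConstraint are illustrative assumptions, not any real SDK:

```typescript
// A single learner-mode instruction the user has to carry around manually.
// (Hypothetical example text - not from any real tool.)
const LEARNER_CONSTRAINT =
  "You are tutoring a beginner. Explain concepts and hint at next steps; " +
  "do not write full solutions.";

// Prepend the saved instruction to a raw prompt before sending it to a model.
function withLearnerConstraint(userPrompt: string): string {
  return `${LEARNER_CONSTRAINT}\n\n${userPrompt}`;
}

const wrapped = withLearnerConstraint("How do I paginate an API response?");
```

The pain point is exactly that this wrapper (or its copy-pasted equivalent) has to be recreated separately for every model, provider, and platform - nothing about it is native to any of them.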

This does not feel native. Because that's not the problem these solutions were built to solve.

In addition, it's incredibly inefficient.

Manually "nerfing" an advanced AI assistant that can spin up hundreds of lines of code, just so it answers a student's question with a few lines of rudimentary code, is likely using the wrong tool for the job.

· · ·

A Better Solution

What I would like to see is something designed from the outset to help people learn how to code. NOT to make professional programmers more productive.

Those objectives are similar only in that they both involve code.

Fundamentally different problems need fundamentally different solutions.

· · ·

Product Vision

Level 3 doesn't interest me. It exists, it kind of works, and it doesn't move the needle.

Level 1 and Level 2 on the other hand, are positively titillating.

For Level 1, I'd like to see fine-tuned models that are primed to be teachers instead of practitioners. They wouldn't need system prompts or context injections warning them not to give the student the answer. Those models can be standalone - i.e. listed for free access on the AWS Bedrock Marketplace - or incorporated into an IDE as a model provider.

Now to Level 2. I don't see the core IDE infrastructure changing too much. The way a user prompt in a chat window gets delivered to an agent or LLM, then passes through the system before being returned, remains largely the same. The ability of LLMs to be aware of a user's project files and to connect to MCP servers also remains unchanged.

The critical departure from current IDEs, however, is the "Interface/UI" aspect. That's also the critical piece to consider when deciding between Monaco, Code-OSS, or Theia-IDE as the base infrastructure.

· · ·

Features

What I'm envisioning is that in addition to a sidebar to find files, a main editor window, and a side-panel AI Assistant, there will be an "always-on" AI agent that constantly monitors the user's cursor position, inputs, and text highlighting.

The objective is to create a hyper-aware AI harness that predicts the user's intentions and knowledge gaps.

For example, if a user creates a new file app/api/query/route.ts, then clicking into the blank document and pausing for 5-10 seconds should trigger an inline popup with 4-5 choices for the user to select from.

These choices could look like "Are you stuck on…", "Would you like to know more about…", "Is X what you are trying to accomplish?", etc. Basically a 'Hover Tooltip', but users can click these suggestions to trigger an automatic prompt that returns an answer in the AI assistant side-panel.
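The pause-to-hint trigger described above can be sketched as plain decision logic. This is a sketch under stated assumptions: EditorState, shouldOfferHints, buildHints, and the 5-second threshold are all illustrative names and values, not the API of any existing editor:

```typescript
// Snapshot of what the always-on agent observes about the editor.
interface EditorState {
  fileIsEmpty: boolean;      // user clicked into a blank, newly created file
  msSinceLastInput: number;  // idle time since the last keystroke or click
  hintDismissedRecently: boolean; // avoid nagging right after a dismissal
}

interface IdleHint {
  label: string; // text shown in the inline popup
}

const IDLE_THRESHOLD_MS = 5000; // lower bound of the 5-10 second pause

// Decide whether the agent should surface the inline popup at all.
function shouldOfferHints(state: EditorState): boolean {
  return (
    state.fileIsEmpty &&
    state.msSinceLastInput >= IDLE_THRESHOLD_MS &&
    !state.hintDismissedRecently
  );
}

// Build the clickable choices for a given file; clicking one would fire
// an automatic prompt into the AI assistant side-panel.
function buildHints(filePath: string): IdleHint[] {
  return [
    { label: `Are you stuck on ${filePath}?` },
    { label: "Would you like to know more about route handlers?" },
    { label: "Is an API endpoint what you are trying to accomplish?" },
  ];
}
```

Keeping the trigger as a pure function over an editor-state snapshot makes it easy to tune (or let the user tune) the threshold without touching the event-wiring layer.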

I also want to nerf inline suggestions: after a user-customizable delay, they would surface only the next keyword or symbol needed to complete a piece of code, not the entire line or function.
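One way to implement that nerf is to post-process whatever completion the model produces down to its first token. A minimal sketch, assuming the harness already has the model's full completion in hand (truncateToNextToken is an illustrative name, not a real API):

```typescript
// Given the model's full completion, keep only the next keyword or symbol.
// Leading whitespace is preserved so the fragment drops into the editor
// at the correct indentation.
function truncateToNextToken(fullCompletion: string): string {
  // One run of word characters (a keyword/identifier) OR one single
  // punctuation/symbol character, after any leading whitespace.
  const match = fullCompletion.match(/^\s*(?:\w+|[^\s\w])/);
  return match ? match[0] : "";
}
```

So a model completion like "function add(a, b) { return a + b; }" would surface only "function", and the student has to type (and think through) the rest before the next fragment appears.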

There will be many more features and micro-features, but that's enough to get started with. Even these simple additions to traditional IDEs will, I think, require significant refactoring of any open-source IDE codebase.

· · ·

Remaining Decisions

What remains undecided is how this product will be delivered.

It could be an extension in the VSCode Marketplace. It could power a learn-to-code website that's more dynamic and hands-on in lesson generation than any code-learning platform out there. It could be a standalone IDE that users download and use in place of VSCode. It could also be all three.

I think it will take some time to untangle this, finalize the feature set, and figure out which delivery formats to prioritize.

A good place to start might be sketching Venn diagrams of the product variations, finding the intersection of those sample spaces, and then beginning work on only the features and codebase needed for that intersection.

· · ·

Concluding Thoughts

This has been listed as "Project X" in my notes for a while now.

But I think I'll finally give my bastard son a name.

{ Raisin.IDE }

Why? Because 'raisin' sounds like the French for 'reason', and in the age of limitless knowledge, the only thing left to teach is reason.

Cheesy? Absolutely, and that's the way I like it.

✌️

Originally published on Stackademic. Cover photo by cottonbro studio from Pexels.