What Generative AI Reveals About the State of Software

Reddit r/artificial / 4/24/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • The author describes two-plus years building an agentic AI platform and using GPT, Claude, and Gemini in production software development.
  • They argue that generative AI doesn’t merely produce low-quality code; it reproduces patterns of how developers currently build software.
  • The piece suggests that the weaknesses of current software practice (the "state of the software") become visible when AI is trained on and deployed in real-world coding workflows.
  • The author frames this as an unsettling implication: AI will effectively write the code we already produce, including its flaws.

I’ve spent more than two years building an agentic AI platform, working daily with GPT, Claude, and more recently Gemini models in real-world production code. They’re powerful, but if you watch closely, you’ll see something unsettling.

They don’t just write bad code.
They write our code.
And that should worry you.

This is what I realized in the mirror we trained.

submitted by /u/curioter