Anyone else feel like AI security is being figured out in production right now?

Reddit r/artificial / 4/4/2026

💬 OpinionSignals & Early TrendsIdeas & Deep Analysis

Key Points

  • Production AI security incidents are increasingly driven by relatively “basic” issues such as prompt injection via external data, overly-permissive agents, and uncontrolled/unsanctioned employee use of AI tools.
  • The article highlights a widening credential-leak trend tied to AI usage and argues that AI accelerates attackers’ ability to find and exploit weaknesses.
  • A core gap is organizational: only a small fraction of companies have dedicated AI security teams, and AI security is often not owned by traditional security functions.
  • Traditional security intuition partially transfers (e.g., trust boundaries) but does not fully map to AI-specific failure modes like prompt injection and agent permission models.
  • While frameworks and references (OWASP LLM/agent Top 10, MITRE ATLAS, NIST AI risk guidance) are emerging, the author stresses the shortage of practitioners who can apply them effectively and calls for more hands-on learning.

I’ve been digging into AI security incident data from 2025 into this year, and it feels like something isn’t being talked about enough outside security circles.

A lot of the issues aren’t advanced attacks. It’s the same pattern we’ve seen with new tech before. Things like prompt injection through external data, agents with too many permissions, or employees using AI tools the company doesn’t even know about. One stat I saw said enterprises are averaging 300+ unsanctioned AI apps, which is kind of wild.

The incident data reflects that. Prompt injection is showing up in a large percentage of production deployments. There’s also been a noticeable increase in attacks exploiting basic gaps, partly because AI is making it easier for attackers to find weaknesses faster. Even credential leaks tied to AI usage have been increasing.

What stood out to me isn’t just the attacks, it’s the gap underneath it. Only a small portion of companies actually have dedicated AI security teams. In many cases, AI security isn’t even owned by security teams.

The tricky part is that traditional security knowledge only gets you part of the way. Some concepts carry over, like input validation or trust boundaries, but the details are different enough that your usual instincts don’t fully apply. Prompt injection isn’t the same as SQL injection. Agent permissions don’t behave like typical API auth.

There are frameworks trying to catch up. OWASP now has lists for LLMs and agent-based systems. MITRE ATLAS maps AI-specific attack techniques. NIST has an AI risk framework. The guidance exists, but the number of people who can actually apply it feels limited.

I’ve been trying to build that knowledge myself and found that more hands-on learning helps a lot more than just reading docs.

Curious how others here are approaching this. If you’re building or working with AI systems, are you thinking about security upfront or mostly dealing with it after things are already live?

Sources for those interested:

AI Agent Security 2026 Report

IBM 2026 X-Force Threat Index

Adversa AI Security Incidents Report 2025

Acuvity State of AI Security 2025

OWASP Top 10 for LLM Applications

OWASP Top 10 for Agentic AI

MITRE ATLAS Framework

submitted by /u/HonkaROO
[link] [comments]