Sam Lavigne's "Slow LLM" art project — where an AI takes two days to generate a haiku — is getting a lot of attention right now. The premise: force people to confront how dependent they've become on instant AI responses.
But while everyone's debating AI dependency, a different problem is quietly burning: AI assistants are leaking private data through vulnerabilities nobody's auditing.
CVE-2026-25253 is exhibit A.
What Happened
The vulnerability hit WebSocket handlers in three major AI assistant platforms. The attack path was simple: malformed payloads forced assistants to echo conversation history — including fragments from other users' sessions.
Not theoretical. 42,000 AI assistant instances were affected before patches shipped. Real users. Real data.
The leaked data wasn't even "sensitive" on its own. Names. Email fragments. Partial addresses. The kind of thing apps routinely send to AI APIs: "Help me draft a reply to John Smith at acme@example.com."
In isolation, each fragment looks harmless. Aggregated across thousands of sessions, it's a PII map.
The Real Problem Isn't the CVE
The CVE was patched in days. The underlying behavior that made it dangerous — apps shipping raw user data to AI APIs without scrubbing — that's still everywhere.
When you call POST /v1/chat/completions, what's in those messages? Full names. Email addresses. Medical details. Financial information. All of it flowing in plaintext to a third-party API, logged in request history, potentially cached at the AI provider.
Nobody designed this as a security flaw. It happened because the default is "send everything."
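To make that concrete, here is roughly what the default path produces. This is a sketch, not any particular app's code: the model name is a placeholder, and the message reuses the email example from earlier in the post.

```python
import json

# The "send everything" default: whatever the user typed becomes the
# request body, verbatim.
user_message = "Help me draft a reply to John Smith at acme@example.com."

payload = {
    "model": "some-model",  # placeholder model name
    "messages": [{"role": "user", "content": user_message}],
}

# The full name and email address travel in plaintext in this body, and
# land in any request log that captures it.
body = json.dumps(payload)
```

Nothing in that flow ever asked whether "John Smith" or the email address needed to leave your system at all.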
Three Things You Can Do Today
1. Scrub PII before it leaves your system
Before sending any user text to an AI API, run it through a PII detection pass. At minimum, strip email addresses, phone numbers, and SSNs. Replace them with tokens ([EMAIL_1], [PHONE_1]) and reconstruct the response on the other side.
This matters especially with user-uploaded documents or form inputs — those are dense with PII.
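A minimal sketch of that tokenize-and-restore approach. The regexes here are deliberately simple illustrations; a production scrubber should lean on a vetted PII-detection library rather than three hand-rolled patterns.

```python
import re

# Illustrative patterns only. SSN is checked before PHONE because the
# looser phone pattern would otherwise swallow SSNs.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text):
    """Replace PII with numbered tokens; return scrubbed text and a mapping."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text), start=1):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token, 1)
    return text, mapping

def restore(text, mapping):
    """Reinsert the original values into the AI response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text
```

Only the tokenized text crosses the wire; the mapping stays on your side, so the provider never sees the raw values and you can still hand the user a response with the real names filled back in.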
2. Don't log full prompts in production
Your application logs are a secondary attack surface. If you're logging full prompt/response pairs for debugging, you're building a PII database you never intended to create. Log metadata (latency, token count, error codes) — not content.
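One way to enforce that discipline is to funnel all completion logging through a helper that only ever sees sizes and timing. A sketch using Python's standard logging module; the function and field names are illustrative:

```python
import logging
import time

logger = logging.getLogger("ai_gateway")  # illustrative logger name

def log_completion(model, prompt, response, started_at):
    # Log metadata only: the prompt and response text never reach the log.
    latency_ms = int((time.monotonic() - started_at) * 1000)
    logger.info(
        "completion model=%s latency_ms=%d prompt_chars=%d response_chars=%d",
        model,
        latency_ms,
        len(prompt),    # size only, not content
        len(response),
    )
```

If debugging genuinely requires content, gate it behind a separate, short-retention debug sink rather than widening what production logs capture.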
3. Use pay-per-call APIs with shorter retention windows
Subscription AI services often have data retention policies buried in their terms. Pay-per-call APIs tend to have shorter or no retention windows. If you're sending anything sensitive, that distinction matters.
The Slow LLM Problem Connects Here
Lavigne's point with Slow LLM is that speed creates dependency. We've built workflows that break without instant AI, without asking whether that dependency is healthy.
The same dynamic plays out in security. We built apps that send everything to AI APIs because that's the fast path, without asking whether the data flowing through is necessary.
The teams behind the 42,000 instances hit by CVE-2026-25253 weren't careless. They were fast-moving teams making the default choice. The default choice is the dangerous one.
Build the slow way. Scrub first. Send less. Log less.
Your users will never know you did it. That's how it's supposed to work.
I built a PII scrubber for exactly this problem — it strips 20 categories of personal data before your prompts leave your system. Free to try at the-service.live.