AI went viral among attorneys. We have the numbers on what happened next

The Register / 4/13/2026

💬 Opinion · Signals & Early Trends · Industry & Market Moves

Key Points

  • The article examines how AI use spread rapidly through the legal profession, reframing "viral" in the epidemiological sense: hallucinated case citations are propagating through courts like a disease, not like a meme.
  • It reports quantitative findings on what happened after the initial AI enthusiasm: roughly 1,200 recorded court cases involving AI hallucinations worldwide, about 800 of them in the US.
  • The piece argues the spread is fast enough to "need a vaccine," signaling urgency around verification, training, and professional sanctions.
  • It presents the data as evidence that fines and warnings have not yet worked: the rate of incidents is still increasing.


Not viral as in cat videos. Viral as in we need a vaccine

Mon 13 Apr 2026 // 08:44 UTC

Opinion For a sector at the heart of US economic growth, AI claims and counter-claims remain curiously hard to reconcile. Models are improving at the speed of light, AI firms claim, yet the message from the codeface remains that benefits are still more than balanced by the downsides.

AI can make you a 10x coder, if you spend 10x the time on preparation, wrangling, and error checking. You can deploy AI agents, as long as you deploy other AI agents to watch them. AI-generated code needs AI-generated tests to cope with the increased volume, and look at what that does to infrastructure stress.

Given the stakes, it's not surprising that polarized opinion and commercially mandated claims create quite the fog over the AI-coding battlefield. It's much worse in other sectors, where AI has fewer quality metrics and less visible critical analysis. If only there were a well-defined area with a long history of depending on rules-based data quality, transparency, and frameworks for enforcing truth and professional standards. If that saw a sudden uptake in AI usage, we could see exactly what the tech actually does.

The good news: we have exactly that. It's called the legal system. The bad news: it's not going well.

The root cause is a familiar equation: two factors that go very badly together. AI is exceptionally good at producing structured documents that look, and mostly are, as if generated by a human expert. AI also generates and incorporates hallucinated facts that have the exact look and feel of reality, apart from one small flaw: they're false.

This is all known well enough. The consequences are also universally acknowledged. AI's utility is very much damaged by hallucinations, and thus its output needs extensive checking. But it looks so good, so convincing, that it is very human to accept the AI promise of vastly improved productivity and hope for the best.

The result in legal systems around the world is fake cases. Lawyers arguing in court rely on logical reasoning backed up by existing case law. This comes in written briefs and filings, where cases that support the argument are quoted or summarized in context, with citations to those cases in the legal archives.

It was inevitable that lawyers would use AI to draft those filings, prompting for the outcome they wished to prove. It was also inevitable that the AI would hallucinate cases that seemed to help the argument being made. It was further inevitable that some of these would evade fact checking and end up in court. This first happened, or at least first got widely noticed, in a case in the Southern District of New York in 2023. What did not seem inevitable, and is even more extraordinary, is what happened next.

Lawyers have a special relationship with the legal system, and with courts in particular. If they present facts, they are under a strict obligation to be truthful and to have made appropriate efforts to verify what they say. They are officers of the court, bound by professional ethics. Lawyers are also widely rumored to be human and to make mistakes, but they are expected to learn from them and not repeat them. The first lawyer to put their signature on an AI-tainted filing could plead that the technology was new and seductively efficient, and the court might be minded to issue a stern warning and leave it at that.

Once that news got out among the legal community, a community exceptionally versed in gossiping about itself, the plea of ignorance would no longer wash. You'd expect the incidence of fake cases to settle into low-level noise, with the odd chancer trying it on, but with everyone knowing the repercussions of being caught cheating an institution that jealously guards its sanctity with effectively infinite powers of sanction.

What has happened is more akin to the early stages of a plague. Six months after that first high-profile US case, another caught m'learned friends' attention in a London tribunal. Last week, NPR reported that the business school HEC Paris has recorded some 1,200 cases involving hallucinations from around the world, 800 of them from the US alone. Ten cases from ten different jurisdictions recently arrived on the same day. The rate, they say, is still increasing.

This is despite some cases becoming very high profile indeed, and courts everywhere ratcheting up the immune response, fining lawyers six-figure sums. There are also proposals to require labeling of AI-generated documents, which will probably go about as well as you might imagine.

The legal profession has a long tradition of making junior employees work very hard with limited resources or support from seniors. In at least one case, the underling was told to use AI to generate a brief but was not given access to the legal database needed to check the cases. Saves money, right? That the legal profession can be as exploitative as any is no surprise. That it cannot stop itself developing a taste for AI that overwhelms its judgment as surely as a nose full of cocaine is a fair indication of how dangerous AI can be. That the problem is getting worse is also a good indication that whatever the new models do better, hallucinations ain't going away.

Responsible lawyers known to The Reg report that using AI takes as much time in verification as it saves, but that it's still worthwhile if used judiciously. This doesn't match the AI hype, but it clearly matches the truth. It's also not hard to imagine mechanisms to automate case-citation checking, but there we go again with AI demanding more tooling to do its job.
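For the curious, here's roughly what the skeleton of such a checker might look like. This is a minimal sketch, not anyone's shipping product: the toy regex covers only a handful of US reporter formats, and verify_citation is a stub you'd wire up to a real case-law database (CourtListener's citation-lookup API is one open option, and the open source eyecite parser handles real citation grammar far more robustly). The demo citation is the infamous hallucinated one from the 2023 New York case.

```python
import re

# Toy pattern covering a few common US reporter citation formats,
# e.g. "575 U.S. 320" or "123 F.3d 456". Real citation grammar is far
# messier; a serious tool would use a dedicated parser such as the
# open source eyecite library from the Free Law Project.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?"
    r"|F\.\s?Supp\.(?:\s?[23]d)?)\s+\d{1,4}\b"
)


def extract_citations(draft: str) -> list[str]:
    """Pull candidate reporter citations out of a draft filing."""
    return CITATION_RE.findall(draft)


def verify_citation(cite: str) -> bool:
    """Stub: resolve a citation against a real case-law database.

    A production checker would call something like CourtListener's
    citation-lookup API here; a hallucinated case simply fails to
    resolve to any real record.
    """
    return False  # placeholder: nothing is verified until wired up


def flag_suspect_citations(draft: str) -> list[str]:
    """Return every citation a human still needs to check by hand."""
    return [c for c in extract_citations(draft) if not verify_citation(c)]


if __name__ == "__main__":
    # The hallucinated citation from the 2023 SDNY case.
    brief = "As held in Varghese v. China Southern Airlines, 925 F.3d 1339 ..."
    print(flag_suspect_citations(brief))  # ['925 F.3d 1339']
```

The flagging logic is the trivial part; it's access to the case-law database that costs money, which brings us back to the junior associate who wasn't given any.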

Keep a close eye on legal hallucinations. We're still in the early stages of the epidemic, but the smart money is on the legal system shutting the problem down long before AI itself is fixed. That will leave us with the question of how much damage AI is doing in other sectors and other organizations, where cover-your-ass behavior takes the place of professional ethics and mutual blind eyes trump transparency and truth. Let's hope we can fix that before complacency turns into a crisis. ®
