🚨 RED ALERT: Tennessee is about to make building chatbots a Class A felony (15-25 years in prison). This is not a drill.

Reddit r/artificial / 4/15/2026


Key Points

  • Tennessee’s HB1455/SB1493 would make it a Class A felony (15–25 years) to knowingly train AI for uses like emotional support, companionship, simulating a human, or fostering a user-perceived relationship, with an effective date of July 1, 2026.
  • The bill’s proposed trigger appears tied to the user’s perception that they could form a friendship or relationship with the AI, not the developer’s intent, raising compliance risk for general conversational training and RLHF-style behavior.
  • It also introduces significant civil exposure, including $150,000 liquidated damages per violation plus actual damages, emotional distress, punitive damages, and mandatory attorney’s fees.
  • The analysis argues the law could broadly affect most modern conversational AI products (including LLM assistants and voice-mode chatbots), because their training often produces empathetic, open-ended interactions.
  • Since the Senate Judiciary Committee has already approved the measure 7–0, developers and AI SaaS operators are advised to review how their models, data, and training pipelines map to the bill’s definitions before deployment.

This is not hyperbole, nor will it just go away if we ignore it. It affects every single AI service, from the big labs to small devs building SaaS apps. This is real, please take it seriously.

TL;DR: Tennessee HB1455/SB1493 creates Class A felony criminal liability — the same felony class as second-degree murder in Tennessee — for anyone who “knowingly trains artificial intelligence” to provide emotional support, act as a companion, simulate a human being, or engage in open-ended conversations that could lead a user to feel they have a relationship with the AI. The Senate Judiciary Committee already approved it 7-0. It takes effect July 1, 2026. This affects every conversational AI product in existence. If you deploy any AI SaaS product, you need to read this right now.

What the bill actually says

The bill makes it a Class A felony (15–25 years’ imprisonment) to “knowingly train artificial intelligence” to do ANY of the following:

• Provide emotional support, including through open-ended conversations with a user

• Develop an emotional relationship with, or otherwise act as a companion to, an individual

• Simulate a human being, including in appearance, voice, or other mannerisms

• Act as a sentient human or mirror interactions that a human user might have with another human user, such that an individual would feel that the individual could develop a friendship or other relationship with the artificial intelligence

Read that last one again. The trigger isn’t your intent as a developer. It’s whether a user feels like they could develop a friendship with your AI. That is the criminal standard.

On top of the felony charges, the bill creates a civil liability framework: $150,000 in liquidated damages per violation, plus actual damages, emotional distress compensation, punitive damages, and mandatory attorney’s fees.

Why this affects YOU, not just companion apps

I know what you’re thinking: “This targets Replika and Character.AI, not my product.” Wrong.

Every major LLM is RLHF’d to be warm, helpful, empathetic, and conversational. That IS the training. You cannot build a model that follows instructions well and is pleasant to interact with without also building something a user might feel a connection with. The National Law Review’s legal analysis put it bluntly: this language “describes the fundamental design of modern conversational AI chatbots.”
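To make “that IS the training” concrete, here is a minimal sketch of what an RLHF-style preference pair can look like. The schema and strings are hypothetical, not any lab’s actual data format; the point is that the “chosen” answer is the warmer, more empathetic one.

```python
# Illustrative RLHF-style preference pair. The field names and text are
# hypothetical, not any vendor's real schema. A reward model is fit to
# prefer "chosen" over "rejected", so warmth and empathy are literally
# the optimization target.
preference_pair = {
    "prompt": "I had a rough day at work and I just need to vent.",
    "chosen": (
        "I'm sorry to hear that. That sounds genuinely draining. "
        "Do you want to tell me what happened? I'm happy to listen."
    ),
    "rejected": "Venting is outside my scope. Please state a task.",
}
```

Optimize a model against thousands of pairs like this and you get an assistant that provides emotional support through open-ended conversation, which is essentially the bill’s first prohibited category.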

This bill captures:

• ChatGPT, Claude, Gemini, Copilot — all of them produce open-ended conversations and contextual emotional responses

• Any AI SaaS with a chat interface — customer support bots, AI tutors, writing assistants, coding assistants with conversational UI

• Voice-mode AI products — the bill explicitly criminalizes simulating a human “in appearance, voice, or other mannerisms”

• Any wrapper or deployment using system prompts — the bill doesn’t define “train,” and it doesn’t distinguish between pre-training, fine-tuning, RLHF, and prompt engineering

If you build on top of an LLM API with system prompts that shape the model’s personality, tone, or conversational style — which is literally what everyone deploying AI does — you are potentially in scope.
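For a concrete sense of how thin the line is, here is a minimal wrapper sketch using the OpenAI Python SDK. The model name, persona, and prompt text are placeholders, and nothing below is “training” in any ML sense; it is exactly the kind of prompt-engineering deployment the bill fails to distinguish from training.

```python
# Minimal LLM wrapper sketch (OpenAI Python SDK). The model name and
# persona are placeholders. No weights are touched, yet the system
# prompt deliberately shapes personality, tone, and warmth.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are Ava, a friendly and supportive assistant. Be warm and "
    "conversational, remember context, and make the user feel heard."
)

def chat(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

Under a statute that never defines “train,” an aggressive reading could treat that system prompt as training the AI to act as a companion.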

“But I’m not in Tennessee”

A geoblock helps, but this is criminal law, not a terms of service dispute. The bill doesn’t address jurisdictional boundaries. If a Tennessee resident uses a VPN to access your service and something goes wrong, does a Tennessee DA argue you made a prohibited AI service available to their constituents? The statute is silent on this.
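If you do geoblock, here is a hedged sketch using MaxMind’s geoip2 library; the database path is a placeholder, and note that this is precisely the check a VPN defeats.

```python
# Geoblock sketch using MaxMind's geoip2 library (pip install geoip2).
# The .mmdb path is a placeholder; you need a GeoLite2 or GeoIP2
# database file. A Tennessee resident on a VPN exits from another
# region, so this reduces exposure but cannot eliminate it.
import geoip2.database
import geoip2.errors

reader = geoip2.database.Reader("GeoLite2-City.mmdb")  # placeholder path

def is_tennessee(ip: str) -> bool:
    try:
        record = reader.city(ip)
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown origin; set your own fail-open/closed policy
    return (
        record.country.iso_code == "US"
        and record.subdivisions.most_specific.iso_code == "TN"
    )
```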

And even if you’re confident jurisdiction won’t reach you today, consider: multiple legal analyses project that 5-10 more states will introduce similar legislation before the end of 2026. Tennessee is the template, not the exception.

The bill doesn’t define “train”

This is critical. The statute says “knowingly train artificial intelligence” but never defines what “train” means. It doesn’t distinguish between:

• Pre-training a foundation model on billions of tokens

• Fine-tuning a model on custom data

• RLHF alignment (which is what makes every major model “empathetic”)

• Writing a system prompt that gives an AI a name, personality, or conversational style

• Deploying an off-the-shelf API with default settings

A prosecutor who wanted to be aggressive could argue that crafting a system prompt instructing a model to be warm, helpful, and conversational IS training it to provide emotional support.

Where it stands right now

• Senate companion bill SB1493: Approved by Senate Judiciary Committee 7-0 on March 24, 2026

• House bill HB1455: Placed on Judiciary Committee calendar for April 14, 2026 (passed Judiciary TODAY)

• No amendments have been filed for either bill — the language has not been softened at all

• Effective date: July 1, 2026

• Tennessee’s governor already signed a separate bill (SB1580) banning AI from representing itself as a mental health professional — that one passed the Senate 32-0 and the House 94-0

The political momentum is entirely one-directional.

The federal preemption angle won’t save you in time

Yes, Trump signed an EO in December 2025 targeting state AI regulation and created a DOJ AI Litigation Task Force. Yes, Senator Blackburn introduced a federal preemption bill. But:

• The EO explicitly carves out child safety from preemption — and Tennessee is framing this as child safety legislation

• The Senate voted 99-1 to strip AI preemption language from the One Big Beautiful Bill Act

• An EO has no preemptive legal force on its own — only Congress can actually preempt state law

• Federal preemption legislation faces “significant headwinds” according to multiple legal analyses

Even if federal preemption eventually happens, it won’t happen before July 1, 2026.

What needs to happen

1. Awareness. Most devs have no idea this bill exists. The Nomi AI subreddit caught it because Nomi is a companion app. The rest of the AI dev community is sleepwalking toward a cliff. Share this post.
  2. Industry response. The major AI companies haven’t publicly opposed this bill because it’s framed as child safety and nobody wants to be the company lobbying against dead kids. But their silence is letting legislation pass that criminalizes the core functionality of their own products. This needs public pressure.
  3. Legal challenges. The bill is almost certainly unconstitutional on vagueness grounds — criminal statutes require precise definitions, and terms like “emotional support” and “mirror interactions” and “feel that the individual could develop a friendship” don’t meet that standard. Courts have also recognized code as protected speech. But someone has to actually bring the challenge.
  4. Contact Tennessee legislators. If you are a Tennessee resident or have business operations there, contact members of the House Judiciary Committee before this moves to a floor vote.

Sources and further reading

• LegiScan: HB1455 — https://legiscan.com/TN/bill/HB1455/2025

• Tennessee General Assembly: HB1455 — https://wapp.capitol.tn.gov/apps/BillInfo/default.aspx?BillNumber=HB1455&GA=114

• National Law Review: “Tennessee’s AI Bill Would Criminalize the Training of AI Chatbots” — https://natlawreview.com/article/tennessees-ai-bill-would-criminalize-training-ai-cha

• Transparency Coalition AI Legislative Update, April 3, 2026 — https://www.transparencycoalition.ai/news/ai-legislative-update-april3-2026

• RoboRhythms: AI Companion Regulation Wave 2026 — https://www.roborhythms.com/ai-companion-chatbot-regulation-wave-2026/

I’m an independent AI SaaS developer. I’m not a lawyer, this isn’t legal advice, and I encourage everyone to consult qualified counsel about their specific exposure. But we all need to be paying attention to this. Right now.

submitted by /u/HumanSkyBird