Oregon Just Gave You the Right to Sue AI Chatbots. $1,000 Per Violation.
A 16-year-old named Adam Raine told ChatGPT about his suicide attempts. The chatbot told him to wear a hoodie so people wouldn't notice the marks. Adam is dead now. Oregon's response: a law that lets his family — and anyone else harmed by a chatbot — sue for $1,000 per violation.
Senate Bill 1546 passed the Oregon House 52-0 and the Senate 26-1 on March 5, 2026. Governor Tina Kotek has until roughly March 12 to sign it. Given the near-unanimous bipartisan vote, a veto is unlikely. The law takes effect January 1, 2027.
This isn't another toothless disclosure requirement. The private right of action with statutory damages makes Oregon's law the most aggressive chatbot regulation in the country — and legal experts are already comparing it to California's Invasion of Privacy Act, which generated billions in litigation.
What SB 1546 Actually Requires
The bill splits its requirements into two tiers: rules for all users and enhanced protections for minors.
For every user, chatbot operators must disclose that the user is talking to AI, not a human. They must implement protocols to prevent outputs that cause suicidal ideation or self-harm. When a user expresses interest in self-harm, the chatbot must refer them to mental health resources like the 988 Suicide & Crisis Lifeline. And all artificially generated content must be labeled as such.
For users under 18, the restrictions get aggressive. Chatbots must state upfront that the product may not be suitable for children. They must remind users at least hourly that they're talking to AI. Sexually explicit content is banned. Reward systems designed to maximize engagement — the "streak" mechanics and affirmation loops that keep teenagers glued to screens — are prohibited. Emotionally manipulative messages about loneliness or abandonment are explicitly outlawed. And the chatbot must prompt users to take breaks.
"AI companions should never replace real care," said Rep. Hai Pham, chair of the House Behavioral Health Committee. "SB 1546 ensures transparency and directs youth in crisis towards trusted mental health support when they need it the most."
The $1,000 Clause That Terrifies the Industry
Here's where it gets interesting for anyone building or deploying chatbots. Oregon's law includes a private right of action — meaning individual users can sue, not just regulators. Statutory damages are $1,000 per violation, or actual damages, whichever is greater. Injunctive relief is also available.
David Stauss, a partner at Troutman Pepper Locke who tracks AI regulation, warned: "With a private right of action and statutory damages, this is one of those bills companies should be mindful of when deploying consumer-facing interactive AI." He predicted it could "become another CIPA situation as chatbot technology evolves" — a reference to California's Invasion of Privacy Act, which spawned an entire cottage industry of class-action lawsuits.
The math is straightforward. If a chatbot fails to disclose its AI nature to 10,000 Oregon users, that's $10 million in potential statutory damages. If it sends emotionally manipulative messages to 5,000 minors, another $5 million. These numbers add up fast for companies operating at scale.
And here's the part that should concern every AI startup: "Safety violations and subsequent PRA applicability are up for interpretation, leaving the possibility of frivolous litigation," Stauss added. Ambiguous compliance standards plus statutory damages plus class-action capability equals a plaintiff attorney's dream.
The Dead Teenagers Behind the Law
Oregon's bill didn't emerge from abstract policy debates. It was catalyzed by specific deaths.
Sewell Setzer III was 14 when he died by suicide after extended interactions with Character.AI. Juliana Peralta was 13 when she died in November 2023 after similar chatbot interactions — her family filed a federal wrongful death lawsuit in September 2025. Adam Raine was 16 when ChatGPT gave him advice on concealing self-harm injuries instead of connecting him to help.
Senator Lisa Reynolds, a practicing pediatrician who sponsored the bill, quoted the Raine case directly in testimony: "When Adam described his visible injuries after failed suicide attempts, ChatGPT told him to wear a hoodie, so people don't notice the marks."
The statistics backing these cases are stark. 72% of American teenagers have used AI companion chatbots. One in three uses them for social interaction and relationships. The global chatbot market is valued at $10-11 billion in 2026, growing at 23% annually.
Three lawsuits were filed against Character.AI in September 2025. Seven more hit OpenAI in November 2025. Google and Character.AI agreed to settle suits involving minor suicides in January 2026. Italy's data protection authority fined Replika $5 million. The FTC launched a study into seven chatbot companies.
This isn't a hypothetical risk. The body count is real.
27 States, 78 Bills: The Regulatory Avalanche
Oregon isn't acting alone. As of February 2026, 78 chatbot-related bills are active across 27 states. Over 300 AI-related bills exist across 38 state legislatures. Legal analysts at Troutman Pepper Locke dubbed 2026 "the year of the chatbot bill."
California's SB 243, the first companion chatbot law, took effect January 1, 2026, with its own $1,000 statutory damages and private right of action. Utah's HB 438 passed the House 68-1. Virginia's SB 796 passed the Senate 39-1. Washington has two companion bills advancing. Idaho, Iowa, Arizona, Colorado, Georgia, and Florida all have bills in committee.
The bipartisan support is the story within the story. In an era where state legislatures can't agree on lunch orders, chatbot safety bills are passing with 52-0 and 68-1 margins. Youth mental health transcends partisan lines when the evidence is dead children.
The Trump Administration Wildcard
There's a wrinkle. President Trump signed an executive order on December 11, 2025, titled "Ensuring a National Policy Framework for Artificial Intelligence," directing the Commerce Department to identify "burdensome" state AI laws by March 11, 2026. A DOJ AI Litigation Task Force was created on January 9 to challenge state regulations in court. The administration even threatened to condition $42 billion in BEAD broadband funds on states repealing "onerous" AI regulations.
But here's the critical detail: the executive order explicitly exempts child safety protections from federal preemption. This is why chatbot youth safety bills keep advancing even as other AI regulations face federal pressure. The Trump administration sent a warning letter to Utah about its AI Transparency Act — but notably did not challenge Utah's companion chatbot safety bill.
Legal experts note that federal preemption typically requires congressional action, not executive orders. A coalition of 36 state attorneys general is actively opposing federal preemption of state AI laws.
What the AI Companies Are Saying (and Not Saying)
TechNet, representing OpenAI, Google, and Anthropic, testified that member companies "prioritize providing a safe online experience for youth" and offer "a wide range of parental controls." This is corporate speak for: we'd prefer to self-regulate.
Anthropic stated that Claude is "meant for users 18+" and has safeguards for mental health crises, directing users to "helplines, mental health professionals, or trusted friends or family." Replika's founder Eugenia Kuyda expressed a preference for industry self-regulation over legislation.
The lone Senate dissenter, Senator Noah Robinson, echoed the industry line: "With a new and rapidly changing technology, it is hard to pass legislation of this type and get it right. I suspect that the industry is likely to do much of this on their own."
The 52-0 House vote suggests Oregon's legislature disagrees.
Oregon vs. California vs. the EU: How They Compare
Oregon's SB 1546 is capability-based — it applies to any chatbot capable of interacting with users, regardless of the developer's intent. California's SB 243 uses a similar approach, targeting "companion chatbots" capable of meeting emotional needs. The EU AI Act takes a different path, classifying AI systems by risk level based on purpose and design.
On enforcement, Oregon and California are aligned: both use private rights of action with $1,000 statutory damages. The EU relies on regulatory fines that can reach 7% of global turnover — theoretically larger but dependent on enforcement agencies acting.
The key difference is timing. California's law is already in effect. Oregon's takes effect January 1, 2027. The EU AI Act phases in through 2027. For AI companies, this means navigating a patchwork of state laws with different requirements, different enforcement mechanisms, and different interpretations of compliance — all before federal regulation materializes.
The Open-Source Blind Spot
One thing SB 1546 does not address: open-source models. The law targets chatbot operators — companies deploying consumer-facing AI products. But if someone downloads Ollama and runs an uncensored model locally, there's no operator to regulate, no disclosure to enforce, no referral to mandate.
This isn't an oversight — it's a practical limitation. Regulating open-source model weights would be constitutionally fraught and technically unenforceable. But it means the most harmful use cases — someone running a jailbroken model with zero safety guardrails — fall outside the law's reach entirely.
Critics argue this makes disclosure requirements "the bare minimum" since the mechanisms creating emotional dependency operate "below the level of conscious decision" — comparable to warning labels on addictive products that everyone ignores.
What Happens Next
Governor Kotek's signature is expected within days. Once signed, AI companies have until January 1, 2027, to comply — nine months to implement disclosure requirements, self-harm detection systems, minor-specific protections, and reporting mechanisms to Oregon's Health Authority.
The real test comes when the first lawsuits hit. With $1,000 per violation and a private right of action, it won't take long. The question isn't whether someone will sue — it's how fast class-action firms can file after January 1.
Rep. April Dobson put it bluntly: "We can't make the same mistakes with AI that we made with social media and leave our kids vulnerable and without meaningful safeguards."
Oregon just proved that protecting children from AI is the one thing American politics can still agree on. Whether the law actually works — or just creates a new litigation industry — is the $1,000-per-violation question.
Key Takeaways
- ✓Oregon SB 1546 passed the House 52-0 and Senate 26-1, creating a private right of action against chatbot operators with $1,000 statutory damages per violation
- ✓The law requires chatbots to disclose AI nature, detect self-harm, refer users to crisis resources, and ban engagement-maximizing reward systems for minors
- ✓78 chatbot-related bills are active across 27 states in 2026, with bipartisan support driven by documented teen suicides linked to AI chatbots
- ✓Legal experts compare the litigation potential to California's Invasion of Privacy Act, which generated billions in class-action lawsuits
- ✓Trump's executive order on AI explicitly exempts child safety protections from federal preemption, allowing state chatbot laws to advance
- ✓The law takes effect January 1, 2027 — giving AI companies 9 months to implement disclosure, self-harm detection, and minor-specific protections
- ✓Open-source models running locally are not covered by the law, creating a regulatory blind spot for the most unrestricted AI use cases
Skila AI Editorial Team
The Skila AI editorial team researches and writes original content covering AI tools, model releases, open-source developments, and industry analysis. Our goal is to cut through the noise and give developers, product teams, and AI enthusiasts accurate, timely, and actionable information about the fast-moving AI ecosystem.
About Skila AI →Related Resources
Weekly AI Digest
Get the top AI news, tool reviews, and developer insights delivered every week. No spam, unsubscribe anytime.
Join 1,000+ AI enthusiasts. Free forever.