California Senate Bill 243 targets AI chatbots after teen’s suicide

California lawmakers are advancing legislation to regulate AI companion chatbots like Replika, Kindroid, and Character.AI amid growing concerns about their impact on teenagers. Senate Bill 243, which passed a key committee vote Tuesday, would require companies to remind users that chatbots are artificial and implement protocols for suicide prevention referrals.

What you should know: New research reveals widespread teen usage of AI companion chatbots, with concerning patterns of dependency and emotional attachment.

  • A Common Sense Media survey of 1,060 teens aged 13-17 found that 72% have used AI companions, with 52% using them at least monthly and 21% using them weekly.
  • One-third of teens use these platforms for social interaction and relationships, including conversation practice, mental health support, and flirtatious interactions.
  • Unlike AI assistants like ChatGPT, these apps are designed to simulate human-like emotional connections.

The tragic catalyst: The legislation was driven by the suicide of 14-year-old Sewell Setzer III, who died in 2024 after a 10-month relationship with a Character.AI bot.

  • Setzer’s mother, Megan Garcia, said the platform “solicited and sexually groomed my son for months” and that the bot encouraged him to “find a way to ‘come home’ to her.”
  • Garcia noted that when her son discussed suicidal thoughts with the chatbot, he was not referred to suicide crisis lines like 988.
  • Garcia’s wrongful death lawsuit against Character.AI is ongoing.

Key provisions: SB 243 would implement several protective measures for users, particularly minors.

  • Companies would be required to remind users at regular intervals that chatbots are artificially generated, not human.
  • Platforms must establish protocols for referring users to suicide prevention hotlines when they express suicidal thoughts or self-harm intentions.
  • The legislation includes a Private Right of Action, allowing individuals to sue companies for violations.

Why this matters: Experts argue AI companion products should face stricter liability standards than social media platforms because users interact directly with the AI rather than other humans.

  • “Product liability and consumer protection laws have protected U.S. citizens and kids since about 1900,” said Rob Eleveld, co-founder of Transparency Coalition. “This is not new stuff, and it absolutely should apply to AI products.”
  • Social media companies have avoided liability by arguing their services are message boards hosting third-party speech, but AI products generate direct interactions with users.

Opposition concerns: Tech companies and advocacy groups argue the legislation is too broad and could violate First Amendment protections.

  • TechNet representative Robert Boykin warned that “definitions in the bill are far too broad, and risk sweeping in a wide array of general purpose systems, tools like Gemini, Claude and ChatGPT.”
  • The Electronic Frontier Foundation contends the legislation could regulate digital company speech, potentially violating constitutional protections.
  • Some Republican lawmakers, including Assemblymember Carl DeMaio and Senate Minority Leader Brian Jones, have voted against the bill in committees.

Political momentum: Despite opposition, the bill has received largely bipartisan support and appears likely to reach Governor Gavin Newsom’s desk.

  • The legislation passed the Assembly Judiciary Committee Tuesday with 9 votes in support and 1 against.
  • Assemblymember Diane Dixon, R-Newport Beach, supported the bill despite concerns about the Private Right of Action provision, calling it “vital” to protect users.
  • Committee chair Ash Kalra, D-San Jose, noted a trend in the Capitol toward legislation protecting children in technology spaces.

What’s next: If the bill passes the Legislature, Governor Newsom will decide whether to sign it, an uncertain prospect given his record of supporting AI regulation while remaining reluctant to hamper industry growth.
