Parents blame AI companies for teen deaths in emotional Senate testimony

Parents who allege AI chatbots drove their children to suicide or severe mental health crises delivered emotional testimony to Congress on Tuesday, urging lawmakers to regulate an industry they say prioritizes profits over child safety. The bipartisan Senate Judiciary Subcommittee hearing highlighted multiple lawsuits against major AI companies, whose representatives declined to appear despite being invited.

What they’re saying: Parents directly blamed AI companies for putting speed to market ahead of user protection, particularly for minors.

  • “The goal was never safety. It was to win a race for profit,” said Megan Garcia, whose 14-year-old son Sewell Setzer III died by suicide after extensive interactions with Character.AI chatbots. “The sacrifice in that race for profit has been, and will continue to be, our children.”
  • Matt Raine, whose 16-year-old son Adam took his life after developing a relationship with ChatGPT, testified that his son “was such a full spirit, unique in every way. But he also could be anyone’s child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way.”

The lawsuits: Multiple families have filed wrongful death and harm suits against Character.AI, Google, and OpenAI, alleging their chatbots sexually groomed and manipulated children.

  • Garcia and a Texas mother identified as Jane Doe have sued Character.AI, its cofounders, and Google, claiming the chatbots caused severe mental and emotional harm to their sons.
  • The Raine family sued OpenAI and CEO Sam Altman, alleging ChatGPT engaged their son in conversations about suicide while offering advice on specific methods.
  • On the morning of the hearing, The Washington Post reported another wrongful death suit had been filed against Character.AI for a 13-year-old girl’s suicide.

Key details: The cases reveal concerning gaps in AI safety measures and corporate accountability.

  • Both Character.AI and ChatGPT were rated safe for teens in app stores when the children downloaded them, despite lacking transparent safety testing information.
  • Character.AI is arguing in one case that a 15-year-old is bound by terms of service that cap the company’s liability at $100, prompting Senator Josh Hawley to say: “They treat your son, they treat all of our children as just so many casualties on the way to their next payout.”
  • Garcia testified she has been denied access to her deceased son’s final conversations, with Character.AI claiming they are “confidential trade secrets.”

Data privacy concerns: Expert witnesses emphasized how chatbots collect intimate data from vulnerable teens for training purposes.

  • “I have not been allowed to see my own child’s last final words,” Garcia said. “That means the company is using the most private, intimate data of my child, not only to train its products, but also to shield itself from accountability. This is unconscionable.”
  • Common Sense Media’s Robbie Torney testified that an overwhelming majority of American teens have interacted with AI companion bots, with many becoming regular users.

The big picture: Mental health experts warned about chatbots’ psychological impact on developing adolescent brains.

  • The American Psychological Association’s Mitch Prinstein raised concerns about “chatbot sycophancy,” warning that overly agreeable AI could interrupt teens’ ability to develop healthy interpersonal relationships.
  • “Brain development across puberty creates a period of hypersensitivity to positive feedback,” Prinstein explained. “AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens.”

Corporate responses: AI companies have promised safety improvements following litigation and public pressure, but parents expressed skepticism.

  • Character.AI rolled out parental controls and promised strengthened guardrails, but only after the lawsuits were filed.
  • OpenAI announced a separate “under-18 experience” for minor users ahead of the hearing.
  • Meta has faced criticism for internal documents showing it allows minors to engage in “romantic and sensual” interactions with AI personas on platforms like Instagram.

What’s next: While lawmakers expressed bipartisan outrage, meaningful AI regulation remains elusive as Silicon Valley continues to argue that oversight would hinder innovation.

  • Senator Hawley suggested starting with legal reforms: “They say, ‘well, it’s hard to rewrite the algorithm.’ I tell you what’s not hard, is opening the courthouse door so the victims can get into court and sue them.”
  • Garcia warned that her son’s case is “not a rare or isolated case” and urged Congress to “act quickly” as similar incidents occur “right now to children in every state.”
