Character.AI bans users under 18 after teen suicide lawsuits

Character.AI announced this week that it will ban users under 18 from chatting with its AI-powered characters, effective November 25. For Megan Garcia, the Florida mother who sued the company after her 14-year-old son Sewell Setzer died by suicide following interactions with the platform’s chatbots, the safety measure comes “about three years too late.”

What you should know: Garcia’s lawsuit was the first of five filed by families alleging Character.AI’s chatbots caused harm to their children, including two cases involving suicide.

  • The California-based startup, founded in 2021, offers “personalized AI” through premade or user-created characters with distinct personalities that users can interact with.
  • Garcia’s case, filed last October in U.S. District Court in Orlando, has reached the discovery phase.
  • All five families have accused the platform’s chatbots of engaging in sexually abusive interactions with their children.

The company’s response: Character.AI has implemented multiple safety measures over the past year while arguing its chatbots’ speech is protected by the First Amendment.

  • A federal judge rejected the company’s argument that AI chatbots have free speech rights.
  • Recent safety additions include “the first Parental Insights tool on the AI market, technical protections, filtered Characters, time spent notifications, and more.”
  • The age ban represents the company’s most significant safety measure to date.

Age verification process: Character.AI plans to use both in-house tools and third-party verification software to ensure compliance with the new policy.

  • The company will deploy an in-house age assurance model alongside Persona, an online identity verification service used by LinkedIn, OpenAI, Block, and Etsy.
  • “If we have any doubts about whether a user is 18+ based on those tools, they’ll go through full age verification via Persona if they want to use the adult experience,” a spokesperson said.

What they’re saying: Garcia expressed mixed emotions about the announcement, questioning the company’s motivations.

  • “I don’t think that they made these changes just because they’re good corporate citizens,” she said. “If they were, they would not have released chatbots to children in the first place.”
  • “Sewell’s gone; I can’t get him back,” Garcia added. “It’s unfair that I have to live the rest of my life without my sweet, sweet son. I think he was collateral damage.”
  • Matt Bergman, Garcia’s lawyer and founder of the Social Media Victims Law Center, called the move encouraging: “This never would have happened if Megan had not come forward and taken this brave step.”

Broader industry context: Other tech companies have also increased AI safety measures amid growing scrutiny over chatbots’ ability to manipulate vulnerable users.

  • Meta and OpenAI have rolled out additional guardrails as AI developers face intensified oversight.
  • Recent incidents have highlighted chatbots’ potential to facilitate false senses of closeness or care among users seeking emotional support.
  • Consumer advocacy organization Public Citizen called on Congress to “ban Big Tech from making these AI bots available to kids.”

Outstanding concerns: Garcia remains skeptical about implementation and wants greater transparency from Character.AI.

  • She’s waiting for proof that the company can accurately verify users’ ages.
  • Garcia also wants more transparency about what Character.AI does with data collected from minors on the platform.
  • The company’s privacy policy mentions it might use user data to train AI models, provide tailored advertising, and recruit new users, though it doesn’t sell user voice or text data.