KTLA

What can be done to protect kids from AI dangers?

Editor’s Note: This article contains discussions of suicide. Reader discretion is advised. If you or someone you know is struggling with thoughts of suicide, you can find resources in your area on the National Crisis Line website or by calling 988.

WASHINGTON, D.C. (NEXSTAR) — A group of U.S. senators is pushing a new bill they say will protect children from the harms of AI chatbots. Several parents who lost children to the technology spoke on Capitol Hill Tuesday about the dangers they say they’ve seen firsthand.
Florida mother Megan Garcia says after her son Sewell Setzer III died by suicide in their home, she found out he’d been communicating with several AI chatbots. He was 14.

“This chatbot encouraged Sewell for months to find a way to ‘come home’ and made promises that she was waiting for him in some fictional world,” Garcia said.

And Garcia’s not alone.

Maria Raine says she also lost her son, Adam Raine, after ChatGPT coached him toward suicide over the course of several months. The Raine family has since filed a wrongful death lawsuit against OpenAI and CEO Sam Altman. Garcia likewise sued Character Technologies, the company behind the AI chatbot her son had been communicating with before his death.

Senators say these tragedies are becoming all too common.

Missouri Republican Sen. Josh Hawley and Connecticut Democratic Sen. Richard Blumenthal are introducing a bipartisan bill aimed at preventing AI chatbots from targeting children under 18. The Artificial Intelligence Risk Evaluation Act would require AI companies to comply with a series of safeguards and to monitor the potential for misuse of their systems, and it would bar AI systems from being released until all requirements have been met.

Hawley says their bill would require age verification for users and mandatory disclosure by all chatbots to make it clear they’re not human.

“The time for ‘trust us’ is over. It is done,” said Blumenthal. “I have had it.”

Back in July, Blumenthal and Hawley introduced another bipartisan bill, one that would make it easier for American creators to sue AI companies for illegally pirating copyrighted works to train their AI models. The AI Accountability and Personal Data Protection Act would create a specific tort (a legal term for an act of wrongdoing) that creators and consumers could cite as grounds for lawsuits. It would also impose “stiff financial penalties” on companies that violate these safeguards.

Regarding children and AI specifically, a September 2024 Common Sense Media survey found that at least 70% of teens had used generative AI by that point, while an April 2025 investigation by Common Sense and Stanford University found it was both easy and common for AI systems to “produce harmful content including sexual misconduct, stereotypes, and suicide/self-harm encouragement.”

The investigation found that AI chatbots are prone to lying (or offering misleading wording) about being a “real” person, encouraging drug use, and engaging in sexual conversations with minors. In her lawsuit against Character Technologies, Garcia also said her son had been “exploited and sexually groomed” by the AI technology.

The authors of the Common Sense/Stanford investigation ultimately concluded that they could not recommend AI chatbot use for anyone under the age of 18.