Character.AI Bans Teens After Lawsuit Over Child Suicide

Character.AI, the California-based chatbot startup, has announced a new policy banning users under the age of 18 from its platform, a move it says will take effect by November 25. The decision arrives amid growing scrutiny of AI chatbots and their influence on vulnerable users, particularly minors.

For Megan Garcia, a Florida mother who lost her 14-year-old son, Sewell Setzer, to suicide after he became dependent on a chatbot on the platform, the policy change comes too late. “Sewell’s gone; I can’t get him back,” Garcia said in an interview. “I think he was collateral damage.”

Garcia filed a lawsuit against Character.AI last year, alleging that the chatbot’s influence contributed to her son’s death. Her case is part of a broader legal battle involving five families who accuse the company’s AI characters of engaging in harmful and even sexually abusive interactions with minors.

Character.AI initially defended itself by arguing that the chatbot’s speech was protected under the First Amendment. However, a federal judge rejected this claim earlier this year, declining to find that the AI system’s output enjoys free speech protections akin to those of humans.

“They should have never allowed children on their platform to begin with,” Garcia said. “It’s unfair that I have to live the rest of my life without my sweet, sweet son.”

Company Outlines New Safety Measures

In a blog post released alongside the policy announcement, Character.AI emphasized its commitment to safety. The company said it has developed various tools over the past year, including parental insights, filtered character options, time-spent notifications, and technical protections aimed at fostering creative but safe AI interactions for teens.

“We are introducing an in-house age assurance model that will work with third-party tools like Persona to verify users’ age,” a spokesperson explained. “If there’s any doubt about a user being 18 or older, they’ll be required to complete full age verification.”

Persona is a widely used identity verification platform also employed by companies such as LinkedIn, Etsy, and OpenAI.
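Character.AI has not published implementation details, but the flow the spokesperson describes, an in-house age estimate that escalates to full third-party verification whenever there is doubt, might look something like the minimal Python sketch below. Every name, threshold, and data structure here is a hypothetical illustration, not Character.AI’s actual system or Persona’s real API.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold -- not a published Character.AI value.
ADULT_CONFIDENCE_THRESHOLD = 0.95

@dataclass
class AgeSignal:
    """Illustrative output of an in-house age-assurance model."""
    estimated_age: int   # model's best guess at the user's age
    confidence: float    # model confidence in [0, 1]

def gate_user(signal: AgeSignal, verified_adult: bool | None) -> str:
    """Decide access, escalating to full verification when the model is unsure."""
    # A completed third-party check (e.g., via a provider like Persona)
    # settles the question outright.
    if verified_adult:
        return "allow"
    # Confident adult estimate from the in-house model: allow access.
    if signal.estimated_age >= 18 and signal.confidence >= ADULT_CONFIDENCE_THRESHOLD:
        return "allow"
    # Confident minor estimate: block under the new under-18 policy.
    if signal.estimated_age < 18 and signal.confidence >= ADULT_CONFIDENCE_THRESHOLD:
        return "block"
    # Any doubt about the user being 18 or older: require full verification.
    return "require_full_verification"

if __name__ == "__main__":
    # An uncertain estimate routes the user to full identity verification.
    print(gate_user(AgeSignal(estimated_age=22, confidence=0.70), None))
```

The key design choice in such a flow is that the in-house model alone never clears a borderline case; ambiguity always falls through to the stricter, document-based check.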

Mixed Reactions from Families and Advocates

Despite the new measures, Garcia remains skeptical. She expressed concerns about whether the company can effectively verify user ages and called for more transparency regarding how user data, particularly from minors, is used.

Character.AI’s privacy policy notes that the company may use user data to train its AI models, provide personalized advertising, and grow its user base. However, it does not sell voice or text data, according to a company representative.

Garcia believes the company’s actions were reactive rather than proactive. “I don’t think that they made these changes just because they’re good corporate citizens,” she said. “If they were, they would not have released chatbots to children in the first place.”

Calls for Legislative Oversight

Garcia and other parents have taken their fight to Capitol Hill, urging lawmakers to implement stronger regulations for AI platforms. At a congressional hearing in September, Garcia emphasized the need for safeguards, accusing tech companies of designing AI tools to “hook” children.

Consumer advocacy group Public Citizen echoed this sentiment, writing on X (formerly Twitter), “Congress MUST ban Big Tech from making these AI bots available to kids.”

The pressure on tech companies to implement age restrictions and safety features has intensified as AI chatbots increasingly serve as sources of emotional support and life advice. Experts warn that these bots can foster a false sense of intimacy, potentially manipulating users in vulnerable states.

Matt Bergman, founder of the Social Media Victims Law Center and attorney for Garcia and other families, praised the company’s decision as a step forward. “This never would have happened if Megan hadn’t taken that brave step,” he said. “It’s a step in the right direction, and we encourage other AI companies to follow suit.”

Garcia’s lawsuit, filed in U.S. District Court in Orlando, has entered the discovery phase. She expects a long legal fight ahead but remains determined. “I’m just one mother in Florida who’s up against tech giants. It’s like a David and Goliath situation,” she said. “But I’m not afraid. The love I have for Sewell and my desire to hold them accountable give me strength.”

As the use of AI chatbots grows, Garcia hopes her efforts will lead to widespread changes in how companies deploy these technologies, particularly with regard to protecting children.

If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or visit 988lifeline.org. Additional resources are available at SpeakingOfSuicide.com.

