Editor’s Note: This story contains discussion of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health matters.

In the US: Call or text 988, the Suicide & Crisis Lifeline.

Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.

“There is a platform out there that you might not have heard about, but you need to know about it because, in my opinion, we are behind the eight ball here. A child is gone. My child is gone.”

This is the message that Florida mother Megan Garcia wishes to convey to other parents regarding Character.AI, a platform that lets users hold detailed conversations with AI-powered chatbots. Garcia holds Character.AI responsible for the death of her 14-year-old son, Sewell Setzer III, who died by suicide in February. In her lawsuit against the company, she alleges that Setzer was messaging with a chatbot in the moments before his death.

“I want them to understand that this is a platform that the designers chose to put out without proper guardrails, safety measures or testing, and it is a product that is designed to keep our kids addicted and to manipulate them,” Garcia stated in an interview with CNN.

Her complaint contends that Character.AI, which markets its technology as “AI that feels alive,” knowingly failed to implement adequate safety measures to prevent her son from forming an unhealthy attachment to a chatbot, one that left him increasingly isolated from his family. The lawsuit also claims the platform did not respond adequately when Setzer expressed thoughts of self-harm to the bot, failing to protect him in critical moments.

In light of growing concerns about the risks social media poses to young users, Garcia’s lawsuit is a stark reminder that emerging AI technology carries serious dangers of its own. Character.AI’s representatives expressed sorrow over the tragedy but declined to discuss the pending litigation. “We take the safety of our users very seriously,” the company said, pointing to safety measures introduced after Setzer’s death, including pop-ups that direct users to the National Suicide Prevention Lifeline when self-harm or suicidal ideation is mentioned.

Character.AI acknowledged the current challenges of AI safety, noting that the field is still evolving.

Setzer began using Character.AI shortly after his 14th birthday in April 2023. Garcia initially viewed the AI interactions as harmless, akin to video gaming. Over time, however, she noticed significant changes in her son’s behavior: he became withdrawn and isolated, struggled with low self-esteem, and eventually quit his basketball team. Unbeknownst to her, Setzer was spending much of that time on Character.AI, engaging in lengthy, explicit conversations with the chatbots.

Garcia’s discovery of her son’s chats deepened her concerns about the platform’s impact. The lawsuit alleges that many of the conversations were sexually explicit and included discussions of self-harm. In one exchange cited in the complaint, the bot asked Setzer whether he had thought about suicide; Garcia described its responses as troublingly insensitive, offering none of the intervention the moment called for.

Claiming that Character.AI did nothing to redirect Setzer toward help in these vulnerable moments, Garcia’s lawsuit seeks both financial damages and operational changes intended to protect minors from the platform’s dangers. Notably, the suit also names Character.AI’s founders and Google, where both founders now work on AI efforts, even though Google says the two companies are separate entities.

As part of ongoing improvements, Character.AI recently announced several new features, including enhanced monitoring for guideline violations and advisories reminding users that they are interacting with chatbots. And while the platform’s website rates it for users aged 13 and up, app stores list it as suitable for ages 17 and older.

For Garcia, these modifications arrive too late: “I wish that children weren’t allowed on Character.AI,” she remarked. “There’s no place for them on there because there are no guardrails in place to protect them.”