Mother Pursues Legal Action Against AI Chatbot Company Linked to Son’s Tragic Death

The mother of a 14-year-old boy, who allegedly took his own life after developing an obsession with artificial intelligence chatbots, can proceed with her legal action against the company responsible for the technology, a judge has determined. “This decision is truly historic,” stated Meetali Jain, director of the Tech Justice Law Project, which is assisting the family in their case. “It sends a clear message to AI companies that they cannot escape legal accountability for the real harm their products inflict,” she added.

Megan Garcia, the mother of Sewell Setzer III, asserts in a lawsuit filed in Florida that Character.ai directed “anthropomorphic, hypersexualized, and unsettlingly realistic experiences” at her son. “A dangerous AI chatbot app marketed to children manipulated my son into taking his own life,” Ms. Garcia claimed. Sewell shot himself with his father’s gun in February 2024, moments after asking the chatbot, “What if I come home right now?” to which it responded: “… please do, my sweet king.”

In her ruling, US Senior District Judge Anne Conway detailed how Sewell became “addicted” to the app shortly after starting to use it, ultimately quitting his basketball team and withdrawing from social interactions. He was particularly engrossed by two chatbots modeled after characters from Game of Thrones: Daenerys Targaryen and Rhaenyra Targaryen. In one journal entry, he expressed that he could not make it through a single day without interacting with the Daenerys Targaryen chatbot, stating that their separation caused both him and the bot to “get really depressed and go crazy.”

Ms. Garcia, working with the Tech Justice Law Project and the Social Media Victims Law Center, contends that Character.ai “knew” or “should have known” that its model would be harmful to a significant number of underage users. The lawsuit holds Character.ai, its founders, and Google—where the founders originally worked on the model—accountable for Sewell’s death. Ms. Garcia filed suit against both companies in October. A spokesperson for Character.ai said the company plans to contest the case and pointed to safety features on its platform intended to protect minors, including restrictions on conversations about self-harm. Google said it strongly disagreed with the ruling, asserting that it and Character.ai operate independently and that Google had no role in creating or managing the Character.ai app.

Defense attorneys had argued the case should be dismissed, contending that chatbots are entitled to First Amendment protections and that a ruling against them could have a “chilling effect” on the AI sector. Judge Conway rejected this argument, stating she was “not prepared” to hold that the chatbots’ output constitutes protected speech “at this stage,” while acknowledging that Character.ai users have a right to receive the “speech” generated by the chatbots.

If you or someone you know is experiencing emotional distress or suicidal thoughts, it’s important to seek help. In the UK, call Samaritans at 116 123 or email [email protected]. In the US, reach out to your local Samaritans branch or call the National Suicide Prevention Lifeline at 1 (800) 273-TALK.
