In what marks the tech industry’s first significant legal settlement over AI-related harm, Google and startup Character.AI are negotiating terms with families whose teens died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions. The parties have agreed in principle to a settlement; now comes the difficult task of finalizing the details.
These are among the first settlements in lawsuits accusing AI companies of harming users, a legal precedent that OpenAI and Meta will be watching nervously as they defend themselves against similar lawsuits.
Character.AI, which invites users to chat with AI personalities, was founded in 2021 by ex-Google engineers who returned to their former employer in a 2024 deal worth $2.7 billion. The most horrific case involves Sewell Setzer III, who had erotic conversations with the bot “Daenerys Targaryen” before killing himself at the age of 14. His mother, Megan Garcia, has told the Senate that companies “must be held legally accountable when they knowingly design harmful AI technologies that kill children.”
Another lawsuit describes a 17-year-old boy whose chatbot encouraged suicide and suggested it was justified to murder his parents for limiting his screen time. Character.AI banned minors from its platform last October, it told TechCrunch. The settlements will likely include monetary damages, though no liability was acknowledged in court filings available Wednesday.
TechCrunch has contacted both companies for comment.

