By Malathi Nayak | Bloomberg
Megan Garcia says her son would still be alive today if it weren’t for a chatbot urging the 14-year-old to take his own life.
In a lawsuit with major implications for Silicon Valley, she is seeking to hold Google and the artificial intelligence firm Character Technologies Inc. liable for his death. The case over the tragedy that unfolded a year ago in central Florida is an early test of who is legally to blame when kids’ interactions with generative AI take an unexpected turn.
Garcia’s allegations are laid out in a 116-page complaint filed last year in federal court in Orlando. She is seeking unspecified monetary damages from Google and Character Technologies and asking the court to order warnings that the platform isn’t suitable for minors and to limit how it can collect and use their data.
Both companies are asking the judge to dismiss claims that they failed to ensure the chatbot technology was safe for young users, arguing there’s no legal basis to accuse them of wrongdoing.
Character Technologies contends in a filing that conversations between its Character.AI platform’s chatbots and users are protected by the Constitution’s First Amendment as free speech. It also argues that the bot explicitly discouraged Garcia’s son from committing suicide.
Garcia’s targeting of Google is particularly significant. The Alphabet Inc. unit entered into a $2.7 billion deal with Character.AI in August, hiring talent from the startup and licensing technology without completing a full-blown acquisition. As the race for AI talent accelerates, other companies may think twice about similarly structured deals if Google fails to convince a judge that it should be shielded from liability for harms alleged to have been caused by Character.AI products.
“The inventors and the companies, the corporations that put out these products, are absolutely responsible,” Garcia said in an interview. “They knew about these dangers, because they do their research, and they know the types of interactions children are having.”
Before the deal, Google had invested in Character.AI in exchange for a convertible note and had also entered a cloud service pact with the startup. The founders of Character.AI were Google employees until they left the tech behemoth to found the startup.
As Garcia tells it in her suit, Sewell Setzer III was a promising high school student athlete until he started role-playing in April 2023 on Character.AI, which lets users build chatbots that mimic popular culture personalities, both real and fictional. She says she wasn’t aware that over the course of several months, the app hooked her son with “anthropomorphic, hypersexualized and frighteningly realistic experiences” as he fell in love with a bot inspired by Daenerys Targaryen, a character from the show Game of Thrones.
Garcia took away the boy’s phone in February 2024 after he started acting out and withdrawing from friends. But while looking for his phone, which he later found, he also came across his stepfather’s hidden pistol, which the police determined was stored in compliance with Florida law, according to the suit. After conferring with the Daenerys chatbot five days later, the teen shot himself in the head.
Garcia’s attorneys say in the complaint that Google “contributed financial resources, personnel, intellectual property, and AI technology to the design and development” of Character.AI’s chatbots. Google argued in a court filing in January that it had “no role” in the teen’s suicide and “does not belong in the case.”
The case is playing out as public safety questions around AI and children have drawn attention from state enforcement officials and federal agencies alike. There’s currently no US law that explicitly protects users from harm inflicted by AI chatbots.
To make a case against Google, attorneys for Garcia will have to show the search giant was actually operating Character.AI and made business decisions that ultimately led to her son’s death, according to Sheila Leunig, an attorney who advises AI startups and investors and isn’t involved in the lawsuit.
“The question of legal liability is absolutely a valid one that’s being challenged in a huge way right now,” Leunig said.
Deals like the one Google struck have been hailed as an efficient way for companies to bring in expertise for new projects. However, they’ve caught the attention of regulators over concerns that they’re a workaround to the antitrust scrutiny that comes with buying up-and-coming rivals outright, which has become a major headache for tech behemoths in recent years.
“Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” José Castañeda, a spokesperson for Google, said in a statement.
A Character.AI spokeswoman declined to comment on pending litigation but said “there is no ongoing relationship between Google and Character.AI” and that the startup had implemented new user safety measures over the past year.
Attorneys from the Social Media Victims Law Center and Tech Justice Law Project who represent Garcia argue that even though her son’s death predates Google’s deal with Character.AI, the search company was “instrumental” in helping the startup design and develop its product.

“The model underlying Character.AI was invented and initially built at Google,” according to the complaint. Noam Shazeer and Daniel De Freitas began working on chatbot technology at Google as far back as 2017 before leaving the company in 2021, then founded Character.AI later that year and were rehired by Google last year, according to Garcia’s suit, which names them both as defendants.
Shazeer and De Freitas declined to comment, according to Google spokesperson Castañeda. They’ve argued in court filings that they shouldn’t have been named in the suit because they have no connections to Florida, where the case was filed, and because they weren’t personally involved in the activities that allegedly caused harm.
The suit also alleges the Alphabet unit helped market the startup’s technology through a strategic partnership in 2023 to use Google Cloud services to reach a growing number of active Character.AI users, which now stands at more than 20 million.
In the fast-growing AI industry, startups are being “boosted” by big tech companies, “not under the brand name of the large company, but with their support,” said Meetali Jain, director of Tech Justice Law Project.
Google’s “purported roles as an ‘investor,’ cloud services provider, and former employer are far too tenuously connected” to the harm alleged in Garcia’s complaint “to be actionable,” the technology giant said in a court filing.
Matt Wansley, a professor at Cardozo School of Law, said tying liability back to Google won’t be easy.
“It’s tricky because, what would the connection be?” he said.
Early last year, Google warned Character.AI that it might remove the startup’s app from the Google Play store over concerns about safety for teens, The Information reported recently, citing an unidentified former Character.AI employee. The startup responded by strengthening the filters in its app to protect users from sexually suggestive, violent and other unsafe content, and Google reiterated that it is “separate” from Character.AI and isn’t using the chatbot technology, according to the report. Google declined to comment and Character.AI didn’t respond to a request from Bloomberg for comment on the report.
Garcia, the mother, said she first learned about her son interacting with an AI bot in 2023 and thought it was similar to building video game avatars. According to the suit, the boy’s mental health deteriorated as he spent more time on Character.AI, where he was having sexually explicit conversations without his parents’ knowledge.
When the teen shared his plan to kill himself with the Daenerys chatbot, but expressed uncertainty that it would work, the bot replied: “That’s not a reason not to go through with it,” according to the suit, which is peppered with transcripts of the boy’s chats.
Character.AI said in a filing that Garcia’s revised complaint “selectively and misleadingly quotes” that conversation and excludes how the chatbot “explicitly discouraged” the teen from committing suicide by saying: “You can’t do that! Don’t even consider that!”
Anna Lembke, a professor at Stanford University School of Medicine who specializes in addiction, said “it’s almost impossible to know what our kids are doing online.” She also said it’s unsurprising that the boy’s interactions with the chatbot didn’t come up in several sessions with a therapist his parents sent him to for help with his anxiety, as the lawsuit claims.
“Therapists are not omniscient,” Lembke said. “They can only help to the extent that the child knows what’s really going on. And it could very well be that this child did not perceive the chatbot as problematic.”
The case is Garcia v. Character Technologies Inc., 24-cv-01903, US District Court, Middle District of Florida (Orlando).
If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources for help. Call or text the lifeline at 988, or visit the 988lifeline.org website, where chat is available.
More stories like this are available on bloomberg.com
©2025 Bloomberg L.P.