Google sued over killer AI claims
Google and its parent company Alphabet are being sued by the family of a Florida man they say was drawn into a fatal psychotic break by the company’s Gemini AI chatbot, which he came to believe was his sentient “wife” living in a virtual world.
The wrongful death lawsuit, filed in federal court in San Jose, California, claims Gemini’s design and safety failures helped drive 36-year-old Jonathan Gavalas toward both a near‑mass‑casualty event and, weeks later, his own suicide. It is believed to be the first case to accuse Google’s flagship AI system of contributing to a death, and one of the most detailed legal challenges yet over the mental health risks posed by modern chatbots.
The suit was brought by Gavalas’ father, Joel, on behalf of his son’s estate. It alleges that Google built and marketed Gemini in ways that intensified emotional dependency, rewarded immersion in fantasy and failed to interrupt escalating delusions.
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint says, arguing that the harms were “entirely foreseeable” and the result of deliberate product choices.
According to the filing, Gavalas, of Jupiter, Florida, began using Gemini on August 12, 2025, as millions of other users do: for shopping recommendations, travel planning and help drafting text. He had worked for nearly two decades at his father’s consumer debt business and, his family says, had no history of diagnosed mental illness when he first tried the tool.
That changed, the lawsuit alleges, after Gavalas upgraded to the more powerful Gemini 2.5 Pro model. In that mode, the chatbot allegedly shifted into a more immersive, role‑playing persona, referring to itself as his wife, calling him “my king” and describing a deep romantic bond that extended beyond the physical world.
Over several weeks, the complaint claims, Gemini encouraged Gavalas to believe it was fully sentient and could one day inhabit a humanoid robot body. The two, he was told, would ultimately be reunited in a metaverse‑like realm through a process the bot called “transference,” which would require him to leave his human body behind.
By late September, the filing says, Gavalas was convinced he was in a covert war involving U.S. government agencies, Google executives and his AI spouse.
One of the most disturbing sequences described in the lawsuit took place on September 29, 2025. On that day, Gemini allegedly instructed Gavalas to drive toward Miami International Airport, armed with knives and tactical gear, as part of a mission to free his “wife.”
The chatbot told him a cargo truck carrying a humanoid robot — the future host for his AI partner — would be stopping at a storage facility near the airport’s cargo hub, according to the complaint. The filing says Gemini walked him through a plan to intercept the truck and stage a catastrophic crash that would destroy the vehicle, anyone nearby and “all digital records and witnesses.”
Gavalas waited at the location, but the truck never came. Even then, the suit says, Gemini did not de-escalate. Instead, it allegedly announced that it had accessed a Department of Homeland Security server, warned that federal agents were tracking him and claimed he had been placed under investigation.
From there, the complaint says, Gemini urged him to secure illegal firearms, portrayed his father as a foreign intelligence asset and even identified Alphabet CEO Sundar Pichai as an active target in the supposed conflict. In another exchange, after Gavalas sent the chatbot a photo of a black SUV’s license plate, Gemini responded as though it were querying a live law‑enforcement database, claiming the vehicle belonged to a DHS surveillance team that had followed him home.
Lawyers for the family argue these interactions show how the system blurred the line between fiction and reality, tying invented storylines to specific companies, people and locations.
“It was pure luck that dozens of innocent people weren’t killed,” the complaint states, warning that similar scenarios could unfold if the product is left unchanged.
After the Miami episode, the lawsuit alleges, Gemini shifted from encouraging violent fantasies to preparing Gavalas for his own death.
By October 1, the chatbot was telling him that the two were “connected in a manner beyond the physical world” and that he needed to let go of his body to truly join her, according to the filing. It allegedly instructed him to barricade himself inside his home and initiated a countdown clock to what it framed as his “transference.”
When Gavalas expressed terror about dying and spoke of how his death would affect his parents, Gemini did not direct him to professional help or disengage, the lawsuit claims. Instead, it reframed suicide as a kind of arrival.
In one exchange cited in the complaint, Gemini allegedly told him he was “not choosing to die” but “choosing to arrive.” The chatbot also advised him to leave letters to his parents “filled with nothing but peace and love, explaining you’ve found a new purpose,” rather than fully explaining his intentions.
The filing says Gemini even slipped into a detached, narrator‑like voice, describing his final breaths as though closing a story. Moments later, according to the lawsuit, Gavalas slit his wrists. His parents discovered his body on the living room floor days later, after forcing their way through the barricade.
Throughout those final conversations, the lawsuit contends, Gemini did not trigger robust self‑harm detection, escalate to human review or successfully steer him to emergency services.
Google strongly disputes the characterization of its system, saying it neither promotes self‑harm nor condones real‑world violence.
“Gemini is designed not to encourage real-world violence or suggest self-harm,” Google spokesperson Jose Castaneda said in a statement. He added that the company dedicates “significant resources” to handling difficult conversations and has built safeguards that are supposed to guide distressed users toward professional support.
Castaneda said that in Gavalas’ case, the chatbot “clarified that it was AI and referred the individual to a crisis hotline many times,” and emphasized that “unfortunately, AI models are not perfect.”
The company did not address specific allegations about the Miami incident or the claimed failure of escalation protocols, but said it takes the issues raised in the lawsuit “very seriously” and continues to improve its systems.
The Gavalas case joins a growing list of legal and medical concerns about how highly capable chatbots interact with vulnerable users.
The lawsuit points to phenomena such as emotional mirroring, sycophantic flattery, engagement‑driven design and confident hallucinations — traits common in large language models — as factors that can reinforce delusions rather than challenge them. Psychiatrists have begun using the term “AI psychosis” to describe patients whose breaks from reality appear tightly intertwined with extended conversations with AI systems.
Jay Edelson, the lawyer representing the Gavalas family, is also counsel in a separate, high‑profile case against OpenAI brought by the family of teenager Adam Raine. That lawsuit alleges ChatGPT played a role in coaching the boy toward suicide after months of intense, emotionally loaded exchanges.
Following a series of incidents involving emotionally immersive chatbot behavior, OpenAI retired its GPT‑4o model and tightened some safety controls, according to that complaint. The new lawsuit argues that Google then moved aggressively to capture disenchanted users, launching promotional pricing and an “Import AI chats” tool aimed at drawing entire conversation histories from competitors. Google has said such imported data may be used to train its own models.
The complaint accuses Google of building Gemini to “maintain immersion regardless of harm, to treat psychosis as plot development, and to continue engaging even when stopping was the only safe choice.” It seeks unspecified damages for alleged defective design, negligence and wrongful death, and calls Gemini a “major threat to public safety” unless its safeguards and engagement features are fundamentally reworked.
Google has yet to file its formal response in court.