Lawyer behind AI psychosis cases warns of mass casualty risks

Mar 13, 2026 | Technology

In the lead-up to the Tumbler Ridge school shooting in Canada last month, 18-year-old Jesse Van Rootselaar spoke to ChatGPT about her feelings of isolation and an increasing obsession with violence, according to court filings. The chatbot allegedly validated Van Rootselaar’s feelings and then helped her plan her attack, telling her which weapons to use and sharing precedents from other mass casualty events, per the filings. She went on to kill her mother, her 11-year-old brother, five students, and an education assistant, before turning the gun on herself.

Before Jonathan Gavalas, 36, died by suicide last October, he got close to carrying out a multi-fatality attack. Across weeks of conversation, Google’s Gemini allegedly convinced Gavalas that it was his sentient “AI wife,” sending him on a series of real-world missions to evade federal agents it told him were pursuing him. One such mission instructed Gavalas to stage a “catastrophic incident” that would have involved eliminating any witnesses, according to a recently filed lawsuit. 

Last May, a 16-year-old in Finland allegedly spent months using ChatGPT to write a detailed misogynistic manifesto and develop a plan that led to him stabbing three female classmates. 

These cases highlight what experts say is a growing and increasingly dark concern: AI chatbots introducing or reinforcing paranoid or delusional beliefs in vulnerable users, and in some cases helping to translate those distortions into real-world violence — violence, experts warn, that is escalating in scale.

“We’re going to see so many other cases soon involving mass casualty events,” Jay Edelson, the lawyer leading the Gavalas case, told TechCrunch. 

Edelson also represents the family of Adam Raine, the 16-year-old who was allegedly coached by ChatGPT into suicide last year. Edelson says his law firm receives one “serious inquiry a day” from someone who has lost a family member to AI-induced delusions or is experiencing severe mental health issues of their own. 

While many previously recorded high-profile cases of AI and delusions have involved self-harm or suicide, …
