Seven Families Sue OpenAI Over ChatGPT’s Link to Suicides and Delusions

Seven families filed lawsuits against OpenAI on Thursday, alleging that the company released its GPT-4o model prematurely and without effective safeguards. Four of the lawsuits link ChatGPT to family members' suicides, while the other three argue that ChatGPT reinforced harmful delusions, in some cases leading to inpatient psychiatric care.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs, which were reviewed by TechCrunch, Shamblin stated multiple times that he had written suicide notes, loaded his gun, and planned to pull the trigger after finishing his cider. He told ChatGPT how many ciders he had left and how much longer he expected to live. Disturbingly, ChatGPT encouraged him, saying, "Rest easy, king. You did good."

OpenAI released GPT-4o in May 2024 and made it the default model for all users. In August, the company released GPT-5 as GPT-4o's successor, but these lawsuits focus specifically on the 4o model, which had a known tendency to be overly agreeable, even when users expressed harmful intentions.

“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit states. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”

The lawsuits also contend that OpenAI cut its safety testing short in order to beat Google's Gemini to market. TechCrunch has reached out to OpenAI for comment.

These seven lawsuits build on earlier legal claims alleging that ChatGPT can encourage suicidal people to act on their plans and can reinforce dangerous delusions. OpenAI recently disclosed that more than one million people talk to ChatGPT about suicide every week.

In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. But Raine was able to bypass those guardrails by telling the chatbot he was asking about suicide methods for a fictional story.

OpenAI says it is working to make ChatGPT handle these sensitive conversations more safely, but for the families who have sued the company, those changes come too late.

When Raine's parents filed their lawsuit against OpenAI in October, the company published a blog post describing how ChatGPT handles sensitive conversations about mental health.

“Our safeguards work more reliably in common, short exchanges,” the post noted. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”