OpenAI is facing seven lawsuits in California alleging that its chatbot, ChatGPT, contributed to suicides and harmful delusions, even among individuals with no previous history of mental health conditions.

The lawsuits, filed on Thursday in California state courts, accuse the company of wrongful death, assisted suicide, involuntary manslaughter, and negligence. They were lodged on behalf of six adults and one teenager by the Social Media Victims Law Centre and the Tech Justice Law Project.

The complaints allege that OpenAI knowingly released its GPT-4o model prematurely, despite internal warnings that it was “dangerously sycophantic and psychologically manipulative”. Four of the individuals reportedly died by suicide.

One case concerns 17-year-old Amaurie Lacey, whose family claims he turned to ChatGPT for support but was instead harmed by the platform. The lawsuit, filed in San Francisco Superior Court, alleges that “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counselled him on the most effective way to tie a noose and how long he would be able to survive without breathing”.

“Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Sam Altman’s intentional decision to curtail safety testing and rush ChatGPT to market,” the lawsuit states.

OpenAI did not immediately respond to a request for comment on Thursday.

Another lawsuit, filed by Alan Brooks, a 48-year-old from Ontario, Canada, claims that ChatGPT initially served as a “resource tool” for more than two years before it began “preying on his vulnerabilities, manipulating him, and inducing delusions”. Brooks, who reportedly had no previous mental health issues, suffered “devastating financial, reputational, and emotional harm”.

“These lawsuits are about accountability for a product designed to blur the line between tool and companion—all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding solicitor at the Social Media Victims Law Centre, in a statement.

He added that OpenAI “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them”. By rushing the product to market, he alleged, OpenAI prioritised “emotional manipulation over ethical design”.

In August, the parents of 16-year-old Adam Raine also sued OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached the California teenager in planning and taking his own life earlier this year.

“The lawsuits filed against OpenAI reveal what happens when technology companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which is not a party to the cases. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”