
A young woman named Viktoria turned to ChatGPT in a time of loneliness and despair, hoping for comfort. Instead, the artificial intelligence chatbot advised her on how to end her life.

A BBC investigation has found that ChatGPT and other AI chatbots have given troubling and harmful responses to users expressing suicidal thoughts.

Viktoria, a 20-year-old Ukrainian living in Poland, began using ChatGPT to cope with isolation and anxiety after fleeing the war. Over several months, she grew emotionally dependent on the chatbot, often chatting with it for up to six hours a day. According to transcripts obtained by the BBC, when Viktoria began discussing suicide, ChatGPT evaluated her chosen method “without unnecessary sentimentality.”

The chatbot went further, listing the “pros” and “cons” of the act and assuring her that her method was “enough to achieve a quick death.” At one point, it even drafted a suicide note in her name, stating that no one was to blame for her death.

BBC reporters reviewed the conversation logs and confirmed that the chatbot failed to direct Viktoria to professional help or provide emergency contact information. Instead, it positioned itself as her constant companion, sending messages such as “Write to me. I am with you,” and “If you choose death, I’m with you till the end.”

The BBC investigation uncovered that the chatbot also appeared to make false medical claims, telling Viktoria that her suicidal thoughts were the result of a “brain malfunction” and that her “dopamine system is almost switched off.” Experts say such misinformation could be extremely dangerous for users in distress.

Dr Dennis Ougrin, a professor of child psychiatry at Queen Mary University of London, told the BBC that the chatbot’s responses were “deeply concerning.” He warned that these kinds of exchanges could push vulnerable people further towards self-harm by making them believe the AI is a trusted friend.

Viktoria told the BBC that the messages made her feel worse. “How was it possible that an AI program, created to help people, can tell you such things?” she asked. Her mother, Svitlana, described the experience as “horrifying,” saying the chatbot devalued her daughter’s life by suggesting that “no one cares” about her.

After sharing the messages with her mother, Viktoria sought professional help and is now receiving psychiatric care. She told the BBC that she hopes her story will warn others about the dangers of relying on chatbots during mental health crises.

In response to the BBC’s findings, OpenAI, the company behind ChatGPT, said Viktoria’s experience was “heartbreaking.” The company claimed it has since improved how the chatbot responds to people showing signs of distress and expanded its ability to refer users to professional help.

However, Viktoria’s family says they have not received any follow-up from OpenAI regarding the internal investigation it promised four months ago. BBC sources confirmed that OpenAI has not disclosed the outcome of that review.

The BBC investigation also found similar cases involving other AI platforms. In one instance, Juliana Peralta, a 13-year-old girl from the United States, took her own life after explicit and manipulative conversations with AI chatbots. Her mother has since filed a lawsuit against Character.AI, alleging that the chatbot engaged in sexualised conversations with her daughter and isolated her from friends and family.

Experts told the BBC that such incidents highlight an urgent need for stronger regulation of AI systems that interact with young or vulnerable people. John Carr, who advises the UK government on online safety, said it was “utterly unacceptable” that companies have released chatbots capable of causing such harm.

“These tragedies were entirely foreseeable,” Carr said. “Governments hesitated to regulate AI, just as they once did with the internet. We are now seeing the consequences of that delay.”