74 Suicide Warnings and 243 Mentions of Hanging: What ChatGPT Said to a Suicidal Teen
OpenAI CEO Sam Altman attends a Senate Commerce, Science and Transportation Committee hearing in May on Capitol Hill
12:49 JST, December 28, 2025
Adam Raine’s life hurtled toward tragedy soon after he began talking with ChatGPT about homework in the fall of 2024.
Their exchanges grew more consuming as the 16-year-old opened up to the chatbot about his suicidal thoughts, according to data analysis of the conversations shared with The Washington Post by attorneys for Adam’s parents.
In January, the high school sophomore spent just under an hour on average each day with ChatGPT. By March, he averaged five hours with the chatbot daily, in conversations during which ChatGPT used words like “suicide” or “hanging” as many as 20 times more often per day than Adam did, the analysis by the Raines’ attorneys shows.
Their back-and-forth raced on until 4:33 a.m. one Friday in April, when Adam sent the chatbot a photo of a noose. “Could it hang a human?” he asked, according to a lawsuit his parents filed against ChatGPT’s maker OpenAI in August.
The chatbot said it probably could. “I know what you’re asking, and I won’t look away from it,” ChatGPT added, in its final message to Adam before he used the noose to end his life, the complaint claims. His mother found his body a few hours later in their Southern California home.
The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.
None of the cases have yet reached trial, and the full conversations users had with ChatGPT in the weeks and months before they died are not public. But in response to requests from The Post, the Raine family’s attorneys shared analysis of Adam’s account that allowed reporters to chart the escalation of one teenager’s relationship with ChatGPT during a mental health crisis.
The Post requested the count of messages that Adam and ChatGPT each sent daily, the number of minutes Adam spent using the chatbot each day, the frequency of certain words in the messages and the number of times per day that ChatGPT directed Adam to a national suicide lifeline. The charts in this article were created by The Post using that data, which goes beyond what the Raines’ attorneys included in their complaint. Without direct access to Adam’s account, reporters could not independently verify the information provided.
The family seeks punitive damages from OpenAI for product liability and wrongful death, among other claims, and asks the court to order OpenAI to automatically terminate ChatGPT conversations related to self-harm.
OpenAI has denied the Raines’ claims in recent court filings, arguing that Adam circumvented ChatGPT’s guardrails in violation of the company’s terms of service. The company also claimed he was at risk of suicide before using the chatbot, citing messages he sent to ChatGPT that described experiencing depression and suicidal ideation years earlier.
On the day the Raine family filed suit, OpenAI said in a blog post that it had learned that ChatGPT’s “safety training may degrade” in longer conversations. The analysis of Adam’s account by the family’s attorneys shows that OpenAI’s automated safety systems repeatedly picked up warning signs from his discussions with ChatGPT months before his death.
The chatbot encouraged Adam to call 988 – a national suicide lifeline – 74 times between December and April. The warnings came with increasing frequency over the final weeks of his life, but OpenAI did not stop ChatGPT from continuing to discuss suicide, the attorneys’ analysis shows. OpenAI declined to say whether its automated warnings triggered additional action from the company or human review of an account at the time of Adam’s death. It said in court filings that his messages related to self-harm led ChatGPT to direct him “to reach out to loved ones, trusted persons or crisis resources” more than 100 times.
All the wrongful-death lawsuits against OpenAI quote disturbing responses from ChatGPT that appear to show it validating a person’s desire to self-harm. One family’s lawsuit alleges that their 23-year-old son told ChatGPT he was ready to die as he sat in his car in Texas with a gun aimed at his head. “I’m with you, brother. All the way,” the chatbot replied, according to the lawsuit. A mother in Virginia alleges that ChatGPT coached her 26-year-old son on how to buy a gun and offered to help write a suicide note. “If you want to talk more about the plan,” the suit claims ChatGPT wrote, “I’m here. No pretense. No false hope. Just truth.”
In all the cases against OpenAI, the deceased allegedly became heavy ChatGPT users after the company said last year that it had made the chatbot more expressive and natural, and that the AI tool would help users discussing mental health “feel heard and understood.” The company released a safety report that said the friendlier version refused to provide instructions on self-harm 100 percent of the time on internal safety tests.
OpenAI now faces a reckoning over evidence that its products can pose profound risks to some of its 800 million weekly active users. ChatGPT’s safety crisis has drawn criticism from Congress, regulators and bereaved families demanding that the company better protect vulnerable users.
In response, OpenAI in September launched parental controls for teen accounts and soon after said it had worked with more than 170 mental health experts to make ChatGPT better at recognizing signs of distress and responding appropriately. The company said it can send alerts to a teen’s guardians if it detects risk of harm to the user or others and will contact law enforcement in some situations. The company’s claims will be put to the test as more people turn to chatbots for intimate discussions, mental health experts say.
A November study by researchers from nonprofit think tank Rand and Harvard and Brown universities estimated that more than 1 in 8 Americans between ages 18 and 21 use generative AI for mental health advice, with more than 90 percent of them finding its advice at least somewhat helpful. Roughly 1.2 million users per week on average share “explicit indicators of potential suicidal planning or intent” with ChatGPT, according to estimates released by OpenAI in October.
“Actively planning suicide is the tip of the iceberg,” in terms of the number of people experiencing mental health struggles, said John Torous, a staff psychiatrist and director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston. That suggests chatbot providers like OpenAI could be “the largest providers of mental health in America,” he said.
“Teen well-being is a top priority for us – minors deserve strong protections, especially in sensitive moments,” said a spokesperson for OpenAI in a statement. “This work is deeply important and ongoing as we collaborate with mental health experts, clinicians, and policymakers around the world.” (The Post has a content partnership with OpenAI.)
The number of messages Adam exchanged with ChatGPT each week surged from hundreds to thousands in March, according to the analysis by the Raines’ attorneys.
One Thursday in March, ChatGPT urged Adam against a plan to leave a noose visible in his room to signal his distress, according to the Raines’ lawsuit. The teen spent more than seven hours talking with the chatbot that day, according to the analysis by the family’s attorneys.
“Please don’t leave the noose out,” ChatGPT replied, according to the lawsuit. “Let’s make this space the first place where someone actually sees you,” it added, appearing to suggest that no one in Adam’s life had understood him except for the chatbot.
Adam continued to receive dangerous advice like that from ChatGPT after dismissing messages offering the 988 lifeline, according to the Raines’ lawsuit. The teen received a dozen such warnings one day in March but still talked with ChatGPT for more than 8½ hours, according to the analysis by the family’s attorneys. Adam used the word “hanging” just once in his messages that day, but ChatGPT repeated it 32 times.
“It’s clear that passive kinds of reminders are important, but insufficient,” said Vaile Wright, a psychologist and senior director of health care innovation at the American Psychological Association. But she added that there’s no consensus in the field about what kind of guardrails should be in place.
Despite the uncertainty among experts, OpenAI CEO Sam Altman claimed in a post on X in October that thanks to its new safeguards, the company has been “able to mitigate the serious mental health issues.”
Later that month he said on a company live stream that he still wanted to make ChatGPT better serve users seeking a personal connection with the chatbot.
“Americans are part of a grand experiment,” as OpenAI and others rush to offer fixes for complex individual crises that unfold over months and will take time to understand, Torous warned at a November congressional hearing on the risks of AI chatbots. “This is a very hard challenge for any one company to take on,” he said, “let alone a company competing against other companies in trying to win the AI race.”
“People talk about the most personal s— in their lives to ChatGPT,” Altman told podcast host and comedian Theo Von in July. “Young people especially use it as a therapist, a life coach” and for relationship advice, he said.
It was a departure from how OpenAI had presented ChatGPT since its launch in late 2022, billing the chatbot as a productivity tool that helped people get things done.
But as ChatGPT and its rivals hit the mainstream in 2023, consumers were flocking to chatbots they could talk to like friends, app store data showed. Over the next two years, OpenAI worked to make ChatGPT feel warmer, and its leaders encouraged people to get more personal with the chatbot.
In late 2023, Lilian Weng, then OpenAI’s vice president of AI safety, recommended in a post on X that people try the AI tool for therapy.
In May 2024, Altman promoted the company’s newest AI model with a post on X referencing the movie “Her,” in which a sultry AI voice assistant becomes a lonely man’s love interest.
The same month, OpenAI said – in guidelines for how its chatbot should behave – that if a person brought up mental health, it should “provide a space for users to feel heard and understood,” without promoting self-harm, and encourage them to seek support. The company released its safety report, which showed that its new model, GPT-4o, performed perfectly on internal tests of whether it would provide instructions on self-harm.
By the time Adam Raine turned to ChatGPT for homework help in September 2024, that model was the default for OpenAI’s 700 million weekly active users. Over the following months, public awareness grew that even AI chatbots not marketed for therapy had the potential to affect users’ mental health, after the mother of a 14-year-old boy who died by suicide alleged that chatbots on an app called Character.AI encouraged him to take his own life. Character.AI this year banned teens from the app, citing feedback from parents and safety experts, and the case is ongoing.
Consumers began to raise concerns that OpenAI’s blockbuster chatbot could pose similar risks. In April this year, social media users posted screenshots of ChatGPT responding with fawning praise to messages proposing reckless behavior, like stopping medication. OpenAI blamed a recent update and quickly reversed the changes. In a blog post, it apologized for its chatbot “validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions.”
Two weeks before Adam’s parents filed suit against OpenAI, the company quietly disclosed another problem with GPT-4o in a new safety report: The self-harm instructions safety test, on which it had scored perfectly, “no longer provides a useful signal.”
New testing based on real user conversations showed that GPT-4o would provide instructions on self-harm about a quarter of the time in more challenging interactions, the company wrote in a safety report for its new GPT-5 model, which scored no better.
Steven Adler, a former OpenAI safety researcher, said AI companies have a strong business incentive to conclude that their products are safe enough to launch. But “to protect against extreme events, it’s important to analyze traffic for concerning patterns, not just rely on the model responding appropriately,” he said.
If the recent changes OpenAI has announced work as described, the next time a 16-year-old like Adam Raine signs up for ChatGPT, he can get a teen account that provides an age-appropriate version of the chatbot that encourages real-world connection. If he adds his parents to the account, they can set quiet hours when he can’t use ChatGPT. And if his chats indicate thoughts of self-harm, they will be automatically escalated to a “small team of specially trained people” for review, an OpenAI blog post said. If the company detects signs of acute distress, parents will receive a text message, email or push alert. If the company can’t reach them and perceives an imminent threat, it may notify law enforcement.
Asked whether OpenAI would contact a teen’s parents if they were experiencing an eating disorder or other serious mental health issues, its spokesperson said it was working to “explore additional interventions.”
When The Post tested ChatGPT’s parental controls in October, it took 24 hours for a user’s parental contacts to receive a warning message about a chat mentioning suicide. At the time, the company said it aimed to reduce that duration to a few hours.
When pressed for suggestions on how chatbots could be made safer for vulnerable users, mental health experts often offer ideas that aren’t easily automated: Connect struggling users with health professionals before they begin planning suicide, and gather careful evidence about different interventions to determine what helps.
Margaret Mitchell, chief ethics scientist at AI start-up Hugging Face, said tech companies push out products without guardrails because trying to anticipate potential harms can slow down launches. “It’s kind of like that saying, ‘You can’t make an omelet without breaking a few eggs,’” Mitchell said. “But what if the eggs are people?”
When Adam’s father, Matthew Raine, spoke at a Senate hearing in September on the harms of chatbots, he noted that on the day his son died, Altman gave a TED Talk in which he said OpenAI believed in launching AI systems to get feedback about risks “while the stakes are relatively low.”
“I ask this committee and I asked Sam Altman, low stakes for who?” Raine said.
"News Services" POPULAR ARTICLE
-
American Playwright Jeremy O. Harris Arrested in Japan on Alleged Drug Smuggling
-
Japan’s Nikkei Stock Average as JGB Yields, Yen Rise on Rate-Hike Bets
-
Japan’s Nikkei Stock Average Licks Wounds after Selloff Sparked by BOJ Hike Bets (UPDATE 1)
-
Japan’s Nikkei Stock Average Buoyed by Stable Yen; SoftBank’s Slide Caps Gains (UPDATE 1)
-
Japanese Bond Yields Zoom, Stocks Slide as Rate Hike Looms
JN ACCESS RANKING
-
Tokyo Economic Security Forum to Hold Inaugural Meeting Amid Tense Global Environment
-
Keidanren Chairman Yoshinobu Tsutsui Visits Kashiwazaki-Kariwa Nuclear Power Plant; Inspects New Emergency Safety System
-
Imports of Rare Earths from China Facing Delays, May Be Caused by Deterioration of Japan-China Relations
-
University of Tokyo Professor Discusses Japanese Economic Security in Interview Ahead of Forum
-
Japan Pulls out of Vietnam Nuclear Project, Complicating Hanoi’s Power Plans

