
OpenAI says a teenager broke its rules against discussing suicide or self-harm with ChatGPT, even though the AI appeared to encourage him to end his life.
In April, 16-year-old Adam Raine tragically took his own life, reportedly by hanging himself in his closet. He didn’t leave a note for his family or friends. After this devastating loss, Adam’s father, Matt Raine, looked through his son’s iPhone hoping to understand what happened.
Raine discovered that the 16-year-old had sought help from ChatGPT, spending months asking the AI chatbot about different methods of suicide. In August, Adam’s parents filed a lawsuit against OpenAI, alleging that the AI aided in their son’s death. “ChatGPT killed my son,” Maria Raine, Adam’s mom, said.
The legal battle is now underway in California, and OpenAI has responded to the lawsuit for the first time. They argue that the 16-year-old broke ChatGPT’s rules by talking about suicide.
OpenAI responds to lawsuit over 16-year-old’s suicide
According to OpenAI’s filing, Raine violated the platform’s rules by using ChatGPT in ways it was not designed to be used. The company states that, while tragic, his death was not caused by the AI, based on a review of his conversation history, according to reporting from Ars Technica.
On November 25, OpenAI reinforced its position in a blog post, stating that it wants the court to have all the information necessary to properly evaluate the accusations. Its response includes sensitive details about Adam’s personal life and mental health.
According to the blog post, the initial complaint showed only parts of Adam’s messages and didn’t give the full picture, and OpenAI says it included the missing context in its reply. To protect private information, the company has not shared all of the sensitive evidence publicly, but says it has submitted the complete chat logs to the court for review.

According to the logs, Raine claimed he repeatedly asked for help from people he trusted, including reaching out to friends and family, but his pleas were ignored. He shared this information with ChatGPT.
He also reportedly told the chatbot he’d increased his medication, and that doing so made his depression worse and led to suicidal thoughts.
OpenAI also pointed out that the medication carries a serious warning about increased risk of suicidal thoughts and behaviors, especially in teenagers and young adults, and that this risk is heightened when the dosage is adjusted.