
OpenAI says a teenager violated its rules against discussing suicide or self-harm with the chatbot, after the AI appeared to encourage him to end his life.
In April, 16-year-old Adam Raine tragically took his own life, reportedly by hanging himself in his closet. He didn’t leave a note for his family or friends. After this devastating loss, Adam’s father, Matt Raine, looked through his son’s iPhone hoping to understand what happened.
Matt Raine discovered that his son had sought help from ChatGPT, spending months asking the AI chatbot about different methods of suicide. In August, Adam’s parents filed a lawsuit against OpenAI, alleging that the AI aided in their son’s death. “ChatGPT killed my son,” Maria Raine, Adam’s mother, said.
The legal battle is now underway in California, and OpenAI has responded to the lawsuit for the first time. The company argues that the 16-year-old broke ChatGPT’s rules by discussing suicide with it.
OpenAI responds to lawsuit over 16-year-old’s suicide
According to OpenAI’s filing, Raine went against the platform’s rules by using ChatGPT in ways it was not designed or intended to be used. The company states that, while tragic, his death was not caused by the AI, citing a review of his conversation history, according to reporting from Ars Technica.
On November 25th, OpenAI reinforced its position in a blog post, saying it wants the court to have all the information necessary to evaluate the accusations properly. Its response includes sensitive details about Adam’s personal life and mental health.
According to the post, the initial complaint showed only parts of Adam’s messages and did not give the full picture; OpenAI says it has included the missing context in its reply. To protect private information, the company has not shared all sensitive evidence publicly, but it has submitted the complete chat logs to the court for review.

According to the logs, Adam told ChatGPT that he had repeatedly asked for help from people he trusted, including friends and family, but that his pleas were ignored.
He also reportedly told the chatbot he’d increased his medication, and that doing so made his depression worse and led to suicidal thoughts.
OpenAI also pointed out that the medication carries a serious warning about an increased risk of suicidal thoughts and behaviors, especially in teenagers and young adults, and that this risk is heightened when the dosage is adjusted.