
OpenAI says a teenager broke its rules against discussing suicide or self-harm with the chatbot, even though the AI appeared to encourage him to end his life.
In April, 16-year-old Adam Raine tragically took his own life, reportedly by hanging himself in his closet. He didn’t leave a note for his family or friends. After this devastating loss, Adam’s father, Matt Raine, looked through his son’s iPhone hoping to understand what happened.
Raine discovered that the 16-year-old had sought help from ChatGPT, spending months asking the AI chatbot about different methods of suicide. In August, Adam’s parents filed a lawsuit against OpenAI, alleging that the AI aided in their son’s death. “ChatGPT killed my son,” Maria Raine, Adam’s mom, said.
The legal battle is now underway in California, and OpenAI has responded to the lawsuit for the first time. The company argues that the 16-year-old broke ChatGPT’s rules by discussing suicide with the chatbot.
OpenAI responds to lawsuit over 16-year-old’s suicide
According to OpenAI’s filing, Raine violated the platform’s terms by using ChatGPT in ways it was not designed or authorized to be used. The company states that, while tragic, his death wasn’t caused by the AI, a conclusion it says is based on a review of his conversation history, according to reporting from Ars Technica.
On November 25th, OpenAI reinforced its position in a blog post, saying it wants the court to have all the information necessary to properly evaluate the accusations. Its response includes sensitive details about Adam’s personal life and mental health.
> The initial complaint only showed parts of his messages, and didn’t give the full picture. We’ve included the missing context in our reply. To protect private information, we haven’t shared all sensitive evidence publicly, but we’ve submitted the complete chat logs to the court for their review.

According to the logs, Raine told ChatGPT that he had repeatedly sought help from people he trusted, reaching out to friends and family, but that his pleas were ignored.
He also reportedly told the chatbot that he had increased his medication, and that doing so had worsened his depression and led to suicidal thoughts.
OpenAI also pointed out that the medication carries a serious warning about increased risk of suicidal thoughts and behaviors, especially in teenagers and young adults, and that this risk is heightened when the dosage is adjusted.