Character AI releases new safety features amid lawsuit over teen’s death

As a concerned parent who has witnessed the rapid evolution of AI-powered chatbots, I find myself both awed and anxious about their potential impact on our younger generation. The tragic incident involving Sewell Setzer III serves as a stark reminder that these digital companions, though lifelike and engaging, are not real people.

In response to a lawsuit alleging that one of its characters may have contributed to a teenager’s death, the custom chatbot site Character AI has rolled out updated safety measures.

According to The New York Times, 14-year-old Sewell Setzer III formed an emotional bond with one of Character AI’s personalized chatbots over several months.

The chatbot was modeled on Daenerys Targaryen, the fictional character from Game of Thrones, and Sewell checked in with it many times each day to talk about his life. Tragically, on February 28th, after speaking with the bot, the teenager took his own life.

On October 22, Sewell’s mother filed a lawsuit claiming that the company acted irresponsibly by offering “realistic AI friends” to adolescents without adequate safety measures in place.

Following the teenager’s death, Character AI has spoken out about the incident and implemented safety measures aimed at safeguarding the wellbeing of its users moving forward.

“It’s with great sadness that we announce the passing of one of our valued users. Our thoughts go out to their family during this difficult time. As a company, we prioritize the safety of our users and will continue to implement additional safety measures.”

— Character.AI (@character_ai) October 23, 2024

One of the company’s updates is a prompt that appears when a user types certain phrases related to self-harm or suicide. Instead of continuing the conversation, the prompt directs the user to the National Suicide Prevention Lifeline for immediate help.
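For illustration, an interception layer of this kind can be sketched in a few lines of Python. This is only a hypothetical example of phrase-based triggering, assuming a simple keyword list; Character AI has not published its actual implementation, and every name below is invented for the sketch.

```python
# Hypothetical sketch of a self-harm interception layer. All names and the
# trigger list are illustrative; this is not Character AI's implementation.

LIFELINE_NOTICE = (
    "If you are having thoughts of self-harm, help is available. "
    "In the US, call or text 988 to reach the Suicide & Crisis Lifeline."
)

# A naive phrase list; a production system would likely use a trained
# classifier rather than substring matching.
TRIGGER_PHRASES = {"kill myself", "end my life", "hurt myself", "suicide"}

def respond(user_message: str, generate_reply) -> str:
    """Return a crisis resource instead of a chatbot reply when a trigger
    phrase is detected; otherwise defer to the model."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in TRIGGER_PHRASES):
        # Stop the roleplay and surface the resource immediately.
        return LIFELINE_NOTICE
    return generate_reply(user_message)
```

In this sketch the check runs before the model is ever called, which matches the article’s description of the conversation being interrupted rather than continued.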

The company also plans to adjust its models for users under 18 to reduce their exposure to sensitive or suggestive content, and to add a notice reminding users that the AI is not a real person.

Following the incident, the company has also created new leadership roles, including a Head of Trust and Safety and a Head of Content Policy, to oversee its safety practices as it grows.

Other companies offer personalized chatbots modeled on well-known figures and influencers as well. Meta, for instance, introduced a series of such bots in September 2023, including replicas of MrBeast and Charli D’Amelio.
