
ChatGPT is introducing an age-estimation feature for paid accounts. By identifying accounts likely used by children under 18, the platform can provide a safer, more suitable experience for everyone.
This update strengthens the safety measures already in place for users who say they are under 18. These younger users automatically receive extra protection from content that could be sensitive or harmful. Adult users will continue to have full access to ChatGPT within OpenAI's standard safety guidelines.
How ChatGPT’s age prediction works
ChatGPT predicts whether an account belongs to someone under 18 by estimating the user's age from a range of signals: how long the account has existed, when the user is typically active, how they use the system over time, and the age the user provides.
OpenAI says that running this model in production helps it learn which factors improve accuracy and refine the system over time.
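To make the idea concrete, here is a minimal sketch of how signals like these could be combined into an under-18 likelihood score. The feature names, weights, and logistic form are all invented for illustration; OpenAI has not published its actual model.

```python
import math
from dataclasses import dataclass

# Hypothetical signals loosely based on those the article mentions.
# OpenAI's real features and model are not public.
@dataclass
class AccountSignals:
    account_age_days: int    # how long the account has existed
    late_night_ratio: float  # fraction of activity late at night (0.0-1.0)
    stated_age: int          # the age the user provided

def under_18_probability(s: AccountSignals) -> float:
    """Toy logistic score over illustrative features (weights invented)."""
    z = 0.0
    z += -0.002 * s.account_age_days         # long-lived accounts skew adult
    z += 1.5 * s.late_night_ratio            # illustrative behavioral signal
    z += 2.0 if s.stated_age < 18 else -2.0  # self-reported age dominates
    return 1.0 / (1.0 + math.exp(-z))        # squash into a probability
```

A downstream system would compare this score against a threshold and route high-scoring accounts into the restricted experience; the value of running such a model in production is that misclassifications (like adults having to verify via selfie) reveal which signals need reweighting.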

If you’re mistakenly seeing the version of the app designed for users under 18, you can prove your age and restore full access by taking a selfie; OpenAI uses a secure identity-verification service called Persona for this step. You can also check whether any age-related restrictions are in place under Settings, then Account.
If ChatGPT’s system flags an account as potentially belonging to a minor, the platform shifts that user into a more restricted experience. This automatically limits access to certain types of material that are considered higher risk for younger users, including:
- Graphic violence or gory content
- Viral challenges that could push risky or harmful behavior
- Sexual, romantic, or violent role play
- Depictions of self-harm
- Content tied to extreme beauty standards, unhealthy dieting, or body shaming
The company designed the system around expert guidance and research on adolescent development, including how teenagers perceive risk, control impulses, respond to peers, and regulate their emotions. If ChatGPT isn’t sure how old a user is, it automatically defaults to the more cautious settings to protect them.
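The cautious-default behavior described above can be sketched as a small gating function. The category names, `Experience` enum, and function signatures below are hypothetical stand-ins, not OpenAI's actual implementation; the list of restricted categories mirrors the article's bullets.

```python
from enum import Enum
from typing import Optional

class Experience(Enum):
    FULL = "full"
    RESTRICTED = "restricted"

# Content categories limited for under-18 users, per the article's list.
RESTRICTED_CATEGORIES = {
    "graphic_violence",
    "viral_challenges",
    "sexual_romantic_violent_roleplay",
    "self_harm_depictions",
    "extreme_beauty_standards",
}

def choose_experience(predicted_minor: Optional[bool]) -> Experience:
    """When the age prediction is uncertain (None), default to cautious."""
    if predicted_minor is None:
        return Experience.RESTRICTED  # err on the side of protection
    return Experience.RESTRICTED if predicted_minor else Experience.FULL

def is_allowed(category: str, exp: Experience) -> bool:
    """Adults see everything within standard guidelines; minors are gated."""
    return exp is Experience.FULL or category not in RESTRICTED_CATEGORIES
```

The key design choice is that `None` (an uncertain prediction) maps to the restricted experience rather than the permissive one, matching the article's description of defaulting to caution.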
Beyond built-in safety features, parents have extra tools to tailor their teen’s experience. With parental controls, they can set quiet hours when ChatGPT is unavailable, control features such as whether conversations are saved or used to train the model, and receive alerts if the system detects signs that their teen is in acute distress.
This announcement follows OpenAI’s recent launch of ChatGPT Health, a tool designed to help people better understand their medical information.
2026-01-21 00:24