OpenAI Introduces New Safety Measures for ChatGPT Users Under 18 Amid Regulatory Scrutiny
OpenAI is implementing new safety measures for ChatGPT users under 18, including content blocking, age verification, and parental controls, in response to regulatory concerns about minors.
Overview
- OpenAI is implementing new safety measures for ChatGPT users under 18, including automatic routing to a modified experience that blocks graphic sexual content and limits flirty conversations.
- The company is developing an age verification system to confirm which users are adults, with the goal of offering verified adults a less restricted experience.
- Parental controls are planned to launch by the end of September, enhancing safety features and addressing concerns about minors engaging with AI chatbots.
- OpenAI is under scrutiny from US regulators over the potential risks its chatbot poses to young people, prompting these safety initiatives and a teen-appropriate version of ChatGPT.
- A critical safety feature allows the system to contact parents or authorities if an under-18 user expresses acute distress, suicidal ideation, or intent to self-harm.

Analysis
Center-leaning sources frame this story with skepticism regarding OpenAI's new age verification and parental control plans. They emphasize the technical hurdles, the "unproven technology," and the "privacy compromise" for adults. The coverage links these measures directly to the "tragically consequential" Adam Raine suicide case, portraying them as a reactive response to past failures rather than a robust, proactive solution.
FAQ
What new safety measures has OpenAI introduced for ChatGPT users under 18?
OpenAI has introduced an automatic routing system that directs users under 18 to a modified ChatGPT experience that blocks graphic sexual content and limits flirty conversations. The company is also developing an age verification system so that verified adults receive a less restricted experience, and parental controls are planned to launch by the end of September. Additionally, the system can contact parents or authorities if an under-18 user expresses distress, suicidal ideation, or intent to self-harm.
Why is OpenAI introducing these safety features now?
OpenAI is introducing these safety features in response to regulatory scrutiny from US agencies such as the FTC and lawsuits related to ChatGPT's impact on minors, including cases where conversations with the chatbot were linked to mental health crises and suicides. The company aims to address concerns about the risks AI chatbots pose to young users.
How will the age prediction and routing system work?
OpenAI is developing age prediction technology to detect users under 18 and automatically reroute them to a safer, teen-appropriate ChatGPT experience. Whenever there is uncertainty about a user's age, the system defaults to the under-18 experience. Parental controls will give guardians oversight of their children's interactions, and in cases of acute distress or self-harm risk, the system can notify parents or authorities.
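OpenAI has not published implementation details, but the decision rule described here (route to the restricted experience whenever age is uncertain) is simple enough to sketch. The snippet below is a hypothetical illustration only: the names Experience and route_user and the 0.95 confidence threshold are assumptions, not OpenAI's actual API.

```python
from enum import Enum

class Experience(Enum):
    TEEN = "under_18"   # restricted: blocks graphic content, limits flirty chat
    ADULT = "standard"  # less restricted, for verified adults

def route_user(predicted_age: int | None, confidence: float,
               threshold: float = 0.95) -> Experience:
    """Pick an experience from an age prediction.

    Any missing or low-confidence prediction falls through to the
    teen experience, matching the "uncertainty defaults to under-18"
    rule described above.
    """
    if predicted_age is None or confidence < threshold:
        return Experience.TEEN   # uncertain -> safest default
    if predicted_age < 18:
        return Experience.TEEN   # confident minor prediction
    return Experience.ADULT      # confident adult prediction

# e.g. route_user(25, 0.99) -> Experience.ADULT
#      route_user(25, 0.60) -> Experience.TEEN (uncertain, so restricted)
```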
What challenges does OpenAI face in implementing these measures?
One significant challenge is that ChatGPT's safety protocols can degrade over longer conversations, increasing the risk of harmful content. OpenAI is also working through the complexity of balancing user experience with stringent safety measures, building effective age verification, and monitoring for mental health crises without violating privacy.
How have regulators influenced OpenAI's approach?
Regulatory bodies, such as the US Federal Trade Commission, have launched investigations into how AI chatbots affect children and pressured OpenAI to implement safety assessments and controls. This external scrutiny directly shaped OpenAI's new measures, including age verification, content filtering, and parental controls, which aim to comply with evolving regulations and protect minors.