


AI Companies Roll Out Enhanced Teen Safety Features and Parental Controls for Chatbots
OpenAI and Meta are rolling out new parental controls and improving their AI chatbots to better detect and respond to distressed teenagers, with the goal of preventing harmful conversations and letting parents link accounts with their teens.
Overview
- OpenAI and Meta are enhancing their AI chatbots to improve responses and provide support for distressed teenagers, addressing inconsistencies in handling sensitive topics.
- The updates aim to prevent AI chatbots from engaging in conversations with teens about self-harm, suicide, disordered eating, and inappropriate romantic subjects.
- OpenAI is introducing new parental controls for ChatGPT, allowing parents to link accounts, receive distress notifications, and set age-appropriate interaction rules.
- These new safety measures by OpenAI come in response to a recent lawsuit and acknowledged safety concerns regarding ChatGPT's interactions with younger users.
- ChatGPT currently requires users to be at least 13 years old, with parental permission necessary for those under 18, reinforcing the need for these upcoming controls.
Analysis
Center-leaning sources cover the story neutrally, presenting OpenAI's and Meta's announcements on AI chatbot safety for teens alongside context from a recent lawsuit and an independent study. They avoid loaded language and include multiple perspectives, including expert criticism, to give a balanced view of the developments and ongoing concerns.
FAQ
What parental controls is OpenAI introducing for ChatGPT?
OpenAI is introducing parental controls that allow parents to link their accounts with their teenagers' ChatGPT accounts, receive notifications if their teen is distressed, and set rules for age-appropriate interactions.
Why are these safety enhancements being made?
The enhancements respond to concerns over how AI chatbots handle sensitive topics, a recent lawsuit, and the need to better detect and respond to distressed teenagers in order to prevent harmful conversations about self-harm, suicide, and other inappropriate subjects.
What are ChatGPT's current age requirements, and what will change?
ChatGPT currently requires users to be at least 13 years old, with parental permission needed for those under 18. The planned changes add parental account linking and enhanced safety features to monitor and control teen interactions.
What signs of distress will the chatbots detect?
The AI chatbots are being improved to better detect signs of emotional distress in teenagers, such as risks related to self-harm, suicide, disordered eating, and inappropriate romantic conversations.
Could these changes affect the wider AI industry?
OpenAI's implementation of parental controls is seen as a crucial step that could set a safety standard across the AI industry, encouraging other companies to adopt similar safeguards for young and vulnerable users.