
AI Companies Roll Out Enhanced Teen Safety Features and Parental Controls for Chatbots

OpenAI and Meta are implementing new parental controls and improving their AI chatbots to better detect and respond to distressed teenagers, prevent harmful conversations, and let parents link their accounts to their teens'.


Overview

A summary of the key points of this story verified across multiple sources.

  • OpenAI and Meta are enhancing their AI chatbots to improve responses and provide support for distressed teenagers, addressing inconsistencies in handling sensitive topics.
  • The updates aim to prevent AI chatbots from engaging in conversations with teens about self-harm, suicide, disordered eating, and inappropriate romantic subjects.
  • OpenAI is introducing new parental controls for ChatGPT, allowing parents to link accounts, receive distress notifications, and set age-appropriate interaction rules.
  • These new safety measures by OpenAI come in response to a recent lawsuit and acknowledged safety concerns regarding ChatGPT's interactions with younger users.
  • ChatGPT currently requires users to be at least 13 years old, with parental permission necessary for those under 18, reinforcing the need for these upcoming controls.
Written by AI using shared reports from 4 articles.



Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources cover the story neutrally by presenting the announcements from OpenAI and Meta regarding AI chatbot safety for teens, while also providing crucial context from a recent lawsuit and an independent study. They avoid loaded language and ensure multiple perspectives, including expert criticism, are included to offer a balanced view of the developments and ongoing concerns.

"OpenAI said it would introduce what it called 'strengthened protections for teens' within the next month."

BBC News · 15d

"OpenAI and Meta are adjusting how their chatbots respond to teenagers and other users asking questions about suicide or showing signs of mental and emotional distress."

ABC News · 15d

"The planned parental controls represent OpenAI's most concrete response to concerns about teen safety on the platform so far."

Ars Technica · 15d

"OpenAI said Tuesday it plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month — part of an ongoing response to recent safety incidents involving ChatGPT failing to detect mental distress."

TechCrunch · 15d

Articles (4)

Compare how different news outlets are covering this story.

FAQ

Dig deeper on this story with frequently asked questions.

What parental controls is OpenAI introducing?

OpenAI is introducing parental controls that allow parents to link their accounts with their teenagers' ChatGPT accounts, receive notifications if their teen appears distressed, and set rules for age-appropriate interactions.

Why are these safety enhancements being made?

The enhancements come in response to concerns over AI chatbots' handling of sensitive topics, a recent lawsuit, and the need to better detect and respond to distressed teenagers in order to prevent harmful conversations about self-harm, suicide, and other inappropriate subjects.

What are ChatGPT's age requirements, and how will they be enforced?

ChatGPT currently requires users to be at least 13 years old, with parental permission needed for those under 18. The planned changes add parental account linking and enhanced safety features to monitor and control teen interactions.

What signs of distress will the chatbots detect?

The AI chatbots are being improved to better detect signs of emotional distress in teenagers, including risks related to self-harm, suicide, disordered eating, and inappropriate romantic conversations.

What broader impact could these controls have?

OpenAI's implementation of parental controls is seen as a crucial step that could set a safety standard across the AI industry, encouraging other companies to adopt similar safeguards for young and vulnerable users.

History

See how this story has evolved over time.

  • This story does not have any previous versions.