
OpenAI Introduces New Safety Measures for ChatGPT Users Under 18 Amid Regulatory Scrutiny

OpenAI is implementing new safety measures for ChatGPT users under 18, including content blocking, age verification, and parental controls, in response to regulatory concerns about minors.


Overview

A summary of the key points of this story verified across multiple sources.

  • OpenAI is implementing new safety measures for ChatGPT users under 18, including automatic routing to a modified experience that blocks graphic sexual content and limits flirty conversations.
  • The company is developing an age verification system to ensure adults are using the chatbot, aiming to provide a more unrestricted experience for verified adult users.
  • Parental controls are planned to launch by the end of September, enhancing safety features and addressing concerns about minors engaging with AI chatbots.
  • OpenAI is under scrutiny from US regulators regarding the potential risks of its chatbot to young people, prompting these new safety initiatives and a teen-friendly version.
  • A critical safety feature includes the system contacting parents or authorities if an under-18 user expresses distress, suicidal ideation, or considers self-harm.
Written by AI using shared reports from 6 articles.


Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources frame this story with skepticism regarding OpenAI's new age verification and parental control plans. They emphasize the technical hurdles, the "unproven technology," and the "privacy compromise" for adults. The coverage links these measures directly to the "tragically consequential" Adam Raine suicide case, portraying them as a reactive response to past failures rather than a robust, proactive solution.

"OpenAI on Tuesday announced a version of ChatGPT for teens, as tech companies face growing pressure to protect minors who use chatbots."

Semafor

"OpenAI announced Tuesday that it is directing teens to an age-appropriate version of its ChatGPT technology as it seeks to bolster safeguards amid a period of heightened scrutiny over the chatbot's safety."

CBS News

"OpenAI acknowledged that developing effective age-verification systems isn't straightforward."

Ars Technica

"OpenAI announced Tuesday that it plans to implement a new age verification system that will help filter underage users into a new chatbot experience that is more age-appropriate."

Gizmodo

"OpenAI announced today that it's developing a 'different ChatGPT experience' tailored for teenagers, a move that underscores growing concerns about the impact of AI chatbots on young people's mental health."

CNET


FAQ

Dig deeper on this story with frequently asked questions.

What safety measures is OpenAI introducing for users under 18?

OpenAI has introduced an automatic routing system that directs users under 18 to a modified ChatGPT experience that blocks graphic sexual content and limits flirty conversations. The company is developing an age verification system so that verified adults get a less restricted experience, and parental controls are planned to launch by the end of September. Additionally, the system can contact parents or authorities if an under-18 user expresses distress, suicidal ideation, or self-harm intent.

Why is OpenAI introducing these safety features now?

OpenAI is introducing these safety features in response to regulatory scrutiny from US agencies, including the FTC, and to lawsuits related to ChatGPT's impact on minors, including cases where conversations with the chatbot were linked to mental health crises and suicides. The company aims to address concerns about the risks AI chatbots pose to young users.

How will OpenAI identify which users are under 18?

OpenAI is developing age prediction technology to detect users under 18 and automatically reroute them to a teen-appropriate ChatGPT experience. When the system is uncertain about a user's age, it defaults to the under-18 experience. Parental controls will give guardians oversight of their children's interactions, and in cases of acute distress or self-harm risk, the system can notify parents or authorities.

What challenges does OpenAI face in implementing these safeguards?

One significant challenge is that ChatGPT's safety protocols can degrade over long conversations, increasing the risk of harmful content. OpenAI must also balance user experience against stringent safety measures, build an effective age verification system, and monitor for mental health crises without violating user privacy.

What role are regulators playing in these changes?

Regulatory bodies such as the US Federal Trade Commission have launched investigations into how AI chatbots affect children and have pressured OpenAI to implement safety assessments and controls. This external scrutiny directly shaped OpenAI's new measures, including age verification, content filtering, and parental controls, which aim to comply with evolving regulations and protect minors.

History

See how this story has evolved over time.

  • This story does not have any previous versions.