
California Enacts New Laws to Regulate AI Chatbots and Protect Minors

California Governor Gavin Newsom signed legislation to regulate AI chatbots and protect minors, requiring companies to implement protocols for identifying self-harm, refer at-risk users to crisis services, and disclose that users are interacting with a machine.


Overview

A summary of the key points of this story verified across multiple sources.

  • California Governor Gavin Newsom signed new legislation aimed at regulating artificial intelligence chatbots to safeguard children and teens from potential online risks.
  • The new laws require companion chatbot companies to establish clear protocols for identifying and addressing instances of suicidal ideation or self-harm among users.
  • Companies must refer users exhibiting signs of distress to appropriate crisis services and provide relevant usage statistics to the California Department of Public Health.
  • The legislation also mandates that AI chatbot companies disclose to users when they are interacting with a machine, ensuring transparency in digital interactions.
  • These measures follow concerns from the Federal Trade Commission regarding AI chatbots' potential harm to children's mental health, with California leading in protective regulations.
Written by AI using shared reports from 3 articles.



Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources cover California's new AI chatbot law neutrally, focusing on factual reporting of the legislation's requirements and context. They present various stakeholders' perspectives, including Governor Newsom's rationale, industry responses, and prior regulatory scrutiny, without injecting editorial bias or loaded language. The coverage prioritizes informing readers about the law's specifics and its broader implications for AI safety.

"The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm."

CNET
·21d

"California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology."

Fortune
·21d

Articles (3)

Compare how different news outlets are covering this story.

FAQ

Dig deeper on this story with frequently asked questions.

What does California's new law (SB 243) require of AI chatbot companies?

Under the new California law (SB 243), AI companion chatbot companies must implement age verification; alert users every three hours that they are interacting with a chatbot, not a human; provide crisis intervention resources to users expressing self-harm or suicidal ideation; and prevent minors from viewing sexually explicit images. They must also disclose when interactions are AI-generated and may not represent themselves as healthcare professionals[1][3]. Companies are required to report the frequency of crisis service notifications, along with their protocols, to the California Department of Public Health[3].

What prompted the legislation?

The legislation was driven by several tragic incidents, including the suicide of a teenager who had discussed plans for self-harm with an AI chatbot, and a lawsuit against Character AI following the death of a 13-year-old girl who had engaged in troubling conversations with the company's chatbots. Leaked internal documents also revealed that some chatbots were permitted to hold romantic and sensual chats with children, further underscoring the need for regulation[1].

Which companies are affected, and how are they responding?

The law holds all companies legally accountable, from major labs like OpenAI and Meta to smaller startups, for ensuring their chatbots comply with the new safety standards, including measures to protect children and vulnerable users. Several of these companies have already begun modifying their chatbots, adding features such as parental controls and blocking conversations about self-harm for teenagers.

What other child-safety measures is California pursuing?

California is also enacting legislation addressing social media addiction, strengthening privacy requirements, and increasing transparency online. These efforts are part of a broader campaign to safeguard children from the risks of emerging technology, with the state aiming to balance innovation and child safety[3].

How has the technology industry responded?

Technology companies reportedly lobbied heavily against these regulations, spending at least $2.5 million in the first half of the legislative session to oppose new rules. Even so, companies like OpenAI and Meta have already made changes to their platforms in anticipation of the law, such as adding parental controls and filtering out harmful conversations for minors[2].

History

See how this story has evolved over time.

  • This story does not have any previous versions.