


Meta Unveils Enhanced Safety Features for Teen Users
Meta has launched new safety features for teens on Instagram and Facebook, using AI to detect underage users and automatically set their accounts to private, while blocking and removing accounts that sexualize children.
Overview
- Meta has implemented new safety features across its platforms, including Instagram and Facebook, specifically designed to protect teen users from harmful content and interactions.
- These new measures include the use of AI to detect underage users and have led to the removal of 635,000 accounts found to be sexualizing children.
- Teen accounts are now automatically set to private, with new restrictions on private messages and enhanced tools for blocking and reporting suspicious accounts.
- Over a million accounts have been blocked or reported due to these new safety protocols, demonstrating the immediate impact of Meta's updated features.
- Protections are also being extended to adult accounts that share content related to children, aiming to create a safer online environment for all young users.
Analysis
The reporting appears neutral, presenting Meta's new safety features and account removals alongside the ongoing legal challenges and scrutiny the company faces over youth mental health. It attributes information clearly, avoids loaded language in its own descriptions, and offers a balanced view of both Meta's proactive steps and the criticisms against it.