Meta launches new teen safety features, removes 635,000 accounts that sexualize children
Briefly

Meta has launched safety features designed to better protect teens using its platforms, including improved blocking and reporting tools and more information about the accounts that message young users. The company has removed 635,000 accounts responsible for inappropriate comments or interactions with children under 13. Teen users have actively blocked and reported millions of accounts following prompts emphasizing safety. Additional measures include using artificial intelligence to verify users' ages and making teen accounts private by default. These changes come amid growing scrutiny of social media's impact on youth mental health.
Meta has removed hundreds of thousands of accounts to protect young users from predatory behavior: 135,000 accounts were removed for leaving sexualized comments on kids' accounts, and another 500,000 were linked to inappropriate interactions.
Meta introduced new safety features for teens that make it easier to block and report accounts and show details about the accounts messaging them. Over one million accounts were blocked or reported after a safety notice was issued.
Meta is testing artificial intelligence to identify users misrepresenting their ages on Instagram; accounts with age discrepancies are automatically converted to teen accounts.
Teen accounts have been private by default since 2024, restricting private messages to users they follow or are already connected with.
Read at www.bostonherald.com