Meta Unveils New Safety Tools to Shield Teens, Children Online

News Desk

Islamabad: Social media giant Meta has unveiled a fresh set of safety tools and updates aimed at enhancing protection for teens and children across its platforms, including Instagram and Facebook.

The newly introduced measures focus on direct messaging, nudity protection, and safeguarding accounts managed by adults that prominently feature children.

The move is part of Meta’s broader efforts to create a safer online environment, especially for younger users who are increasingly vulnerable to exploitation and abuse.

For teen users, Meta is enhancing direct messaging (DM) features by providing more contextual information about who they’re interacting with. This includes displaying account creation dates, safety tips, and a new streamlined option to block and report users in one step.

In June alone, Meta reported, teens used safety notices to block one million accounts and to report another one million after viewing the warnings.

To counter the growing threat of cross-border sextortion scams, Meta is rolling out a “Location Notice” feature on Instagram. The alert notifies users when they’re chatting with someone based in a different country, since predators targeting minors often operate from abroad.

According to the company, more than 10 percent of users tapped the notice to learn how to protect themselves.

Nudity protection remains a core part of Meta’s safety toolkit. The feature automatically blurs suspected nude images received in DMs and prompts users with a warning.

Meta says 99 percent of users, including teens, have kept this feature active. Notably, nearly 45 percent of users opted not to share the explicit images after seeing the warning.

In a major update for adult-managed accounts that showcase children—such as those run by parents, influencers, or talent agents—Meta will now apply teen-level protections. These include strict message controls and the automatic filtering of offensive comments through its Hidden Words feature.

The platform will also limit the discoverability of such accounts by suspicious users, reducing the risk of unwanted contact.

To further tighten control, Meta is removing certain monetization options from these accounts, such as receiving gifts or offering paid subscriptions.

The company also revealed that it recently removed nearly 135,000 Instagram accounts for leaving sexualized comments or requesting explicit content from accounts featuring children under 13. An additional 500,000 linked accounts were taken down across Instagram and Facebook.

Meta emphasized its ongoing collaboration with other tech companies through the Tech Coalition’s Lantern program to track and prevent harmful users from reappearing on other digital platforms.

These updates are part of Meta’s sustained commitment to child safety, amid growing scrutiny of online platforms and their role in protecting young users.
