Instagram Introduces Protection Feature Amid Safety Concerns

Written by Camilla Jessen

Apr. 11, 2024, 10:39 AM CET

Technology
Photo: Angie Yeon / Shutterstock.com
Instagram introduces a new feature.

Instagram is rolling out a new feature designed to protect teenagers from unsolicited nudity in direct messages, a move by parent company Meta to address growing concerns over the safety of young users on its platforms.

The feature, according to Reuters, comes as Meta faces increased scrutiny in both the United States and Europe over the mental health impact of its social media apps.

Enhanced Safety Measures

The new feature uses on-device machine learning to detect and blur images containing nudity in direct messages. It will be on by default for users under 18, and Meta also plans to prompt adult users to enable it.

"Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where Meta won't have access to these images – unless someone chooses to report them to us," Meta explained.

In addition to safeguarding against explicit content, Meta is developing technology aimed at identifying accounts potentially involved in sextortion scams.

The company is experimenting with new pop-up messages designed to warn users who may have interacted with such accounts, further bolstering its efforts to protect its community from online exploitation.

A Response to Growing Scrutiny

Meta's latest safety features for Instagram come amid increasing scrutiny from regulators in the United States and Europe. The tech giant faces allegations that its platforms contribute to addiction and mental health issues among teens.

In January, Meta announced measures to hide sensitive content from teenage users on Facebook and Instagram, including topics related to suicide, self-harm, and eating disorders.

The company's safety push also comes against a backdrop of legal action: in October, attorneys general from 33 U.S. states sued Meta, accusing it of misleading the public about the risks associated with its platforms.

The European Commission has also inquired about Meta's efforts to shield children from illegal and harmful content.