
Instagram’s Design Makes It Unsafe for Teens, Report Shows
A new report by four nonprofits and former Meta employee and whistleblower Arturo Bejar claims that Instagram's "deliberate design choices" make it unsafe for teens, despite Meta's promises. The research found that only 8 of the 47 teen safety features tested are fully effective.
Penned by the nonprofits Cybersecurity for Democracy, Fairplay, Molly Rose Foundation, and Parents for Safe Online Spaces, the September 9 report used test accounts (some of them registered as underage users) and examined different interactions between them to assess the platform's safety features.
Through this process, the authors tested 47 of Meta's 53 teen safety features on Instagram. The report found that the platform's algorithm encouraged underage users to engage in sexually suggestive behavior, recommended content promoting self-harm and suicidal ideation, and surfaced graphically violent content, and that the platform failed to give teens tools to report sexual harassment, among other failures.
Findings suggest that only 8 of these safety tools worked as intended, 9 “reduced harm, but came with notable limitations,” and 30 simply didn’t work or were no longer available.
According to the researchers, Meta has chosen to forgo "real steps" to make the platform safer for teenagers, "opting instead for splashy headlines about new tools for parents and Instagram Teen Accounts for underage users."
The authors also made special note that their focus was on the design of Instagram as a social media platform, not content moderation. “This distinction is critical because social media platforms and their defenders often conflate efforts to improve platform design with censorship,” the report says. “Holding Meta accountable for deceiving young people and parents about how safe Instagram really is, is not a free speech issue.”
Meta called the report “misleading” and “dangerously speculative.”
"This report [misstates] how our safety tools work […]. Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls," said the company, which maintains that its Teen Accounts feature, introduced roughly a year before the report, actively protects teenagers.
The report's authors, for their part, remain unconvinced: "Until we see meaningful action, Teen Accounts will remain yet another missed opportunity to protect children from harm, and Instagram will continue to be an unsafe experience for far too many of our teens."
Earlier in September, former and current Meta employees submitted documents to the US Congress that allegedly show the company suppressed its research findings on child safety.