
Meta Allegedly Suppressed Child Safety Research
Two former and two current Meta employees disclosed documents to the US Congress allegedly showing that Meta sought to suppress research about safety risks to children in its virtual reality (VR) products, according to a September 8 exposé by the Washington Post.
Back in 2021, whistleblower Frances Haugen leaked internal documents revealing that Meta’s own research had shown Instagram could have negative effects on teenage girls’ mental health. This prompted years of congressional scrutiny of Meta’s practices.
According to the Post’s report, it also resulted in the company changing its internal policies regarding research on topics deemed “sensitive,” such as gender, race, and harassment. The whistleblowers claim this change was aimed at giving Meta’s legal team “plausible deniability” about the dangers of the company’s products.
Meta’s proposed changes included adding lawyers to the research process so that the results could be covered by attorney-client privilege. Another suggestion was to reword research results to avoid explicit terms that could negatively implicate the company, such as “illegal” or “not compliant.”
According to one of the whistleblowers, who was working as a researcher for Meta in Germany at the time, the company explicitly instructed him to omit some of his findings. Among them was testimony from a teenage boy who said his younger brother, who was under 10, had been sexually propositioned repeatedly while using Meta’s VR technology.
Instead, the delivered report simply stated that German parents had concerns regarding grooming.
Another document shows a Meta lawyer instructing a researcher to avoid collecting any evidence that children were using VR devices, “due to regulatory concerns.”
“These few examples are being stitched together to fit a predetermined and false narrative; in reality, since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being,” said a Meta spokesperson.
The material given to Congress spans thousands of pages of internal documents gathered over a decade.
Meta has repeatedly faced scrutiny for its overall approach to children’s safety. Just last month, a different set of leaked documents showed that Meta’s AI chatbot had been allowed to engage in “romantic” conversations with children.