
A new report by a Meta whistleblower and university researchers found that many of Instagram's teen safety features, introduced over the years amid congressional and public pressure, fall short of their objectives and fail to protect young users.
Nearly two-thirds of the social media platform's safety features were deemed ineffective or no longer available, according to the report from former Facebook engineering director Arturo Béjar and Cybersecurity for Democracy, a research project based at New York University and Northeastern University.
Ian Russell and Maurine Molak, two parents who lost children after they experienced cyberbullying and were exposed to depression and suicide material on Instagram, wrote in a foreword to the report that Meta's new safety measures are ineffective.
Russell's Molly Rose Foundation and Molak's ParentsSOS also participated in the report, along with the children's safety group Fairplay.
"If anyone still believes the company will voluntarily change on its own and prioritize young people over engagement and profits, we hope this report puts that idea to rest once and for all," they said.
Of the 47 safety features tested by the researchers, 64 percent received a "red" rating, indicating they were either no longer available or could be easily circumvented or evaded with less than three minutes of effort.
These included features such as screening for certain keywords or offensive material in comments, warnings on captions or in chats, some blocking capabilities, and messaging restrictions between adults and teens.
Another 19 percent of Instagram's safety features received "yellow" ratings from researchers, who found they reduced harm but still had limitations.
For example, the report rated a feature that allows users to swipe to remove inappropriate comments as "yellow," since the offending accounts can simply keep commenting and users cannot give a reason for their decision.
Parental supervision tools, which can restrict a teen's usage or notify parents when their child files a report, were also placed in this middle tier because they are unlikely to be used by many parents.
Researchers gave the remaining 17 percent of safety tools a "green" rating. These included the ability to turn off comments, restrictions on who can tag teens, and tools that prompt parents to approve or deny changes to the default settings on their children's accounts.
Meta, the parent company of Instagram and Facebook, called the report "misleading and dangerously speculative" and suggested it undermines the conversation around teen safety.
"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today," the company said in a statement shared with The Hill.
"The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night," the company continued. "Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We'll continue improving our tools, and we welcome constructive feedback, but this report is not that."
Meta also expressed concern about how the report assessed the company's safety updates, given that many tools worked as designed but received a yellow rating because researchers suggested they should go further.
A blocking feature, for instance, received a "yellow" rating even though the report acknowledged it works, because users cannot provide a reason for wanting to block an account, which researchers said would be "an invaluable signal to detect malicious accounts."
Béjar, a former Facebook employee who participated in the report, testified before Congress in 2023, accusing the company's top officials of dismissing warnings about unwanted sexual advances and bullying that teens experienced on Instagram.
His testimony came two years after Facebook whistleblower Frances Haugen came forward with allegations that the company knew about its products' negative mental health effects on young girls.
About four years later, more Meta whistleblowers have come forward with new concerns about the company's safety practices. Earlier this month, six current and former employees accused the company of doctoring or suppressing internal safety research, particularly research involving young users on its virtual and augmented reality platforms.

