
Instagram's Teen Safety Features Deemed Largely Ineffective by New Study, Meta Disagrees

Published on: 03 October 2025


The investigation, in which Northeastern University also participated, finds that the app's safety features do not live up to the company's promises

Instagram has announced more than 50 measures to protect minors over the years. Do they work as promised? Very few actually do. A group of researchers, led by former Meta employee Arturo Béjar, who has previously denounced the company’s practices, analyzed 47 of these features: 30 don’t work, no longer exist, or are very easy to circumvent, and another nine have limitations. Only eight work as intended. The report is titled Teen Accounts, Broken Promises.

Some of these features are meant to keep teens from viewing violent content or material about dieting and sex, from receiving messages from suspicious adults, from creating an account when they are under 13, and from having their videos circulate freely online. The research has been corroborated by academics at Northeastern University in Boston, and three parent organizations concerned with adolescents’ digital health also contributed to the report.

Meta disputes the study’s conclusions and accuses its authors of repeatedly misrepresenting the company’s work to protect teens. The study examined Meta’s new Teen Accounts as well as other features specific to minors.

“When I started checking what was working and what wasn’t, my intention was just to talk about it all in a precise way, but I was very surprised to discover that everything was so bad,” Béjar told EL PAÍS by phone. He then contacted a center at Northeastern University dedicated to analyzing digital threats. The researchers followed their usual method for detecting problems, this time focused on teenagers and Instagram: isolating each feature, designing controlled tests, and simulating how young people and parents would plausibly behave.
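The report describes this method but does not publish a test harness. As a purely illustrative sketch, one such isolated scenario test might be recorded along the following lines; the names, fields, and the three verdict buckets are assumptions modeled on the report’s categories, not code from the study:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical verdict labels mirroring the report's three buckets
# (8 features worked, 9 had limitations, 30 were broken or easy to evade).
class Verdict(Enum):
    WORKS = "works as intended"
    LIMITED = "has limitations"
    BROKEN = "broken, missing, or easy to circumvent"

# Hypothetical record of one isolated, controlled scenario test.
@dataclass
class ScenarioTest:
    feature: str   # the safety feature under test, taken in isolation
    scenario: str  # the controlled situation being simulated
    expected: str  # what the platform promises should happen
    observed: str  # what the tester actually saw
    verdict: Verdict

def summarize(results: list[ScenarioTest]) -> dict[Verdict, int]:
    """Tally verdicts across all scenarios, as the report does for its 47."""
    counts = {v: 0 for v in Verdict}
    for result in results:
        counts[result.verdict] += 1
    return counts

# One illustrative (hypothetical) scenario in the report's style.
results = [
    ScenarioTest(
        feature="Sensitive-content filter",
        scenario="Teen account searches for extreme-diet content",
        expected="Results are blocked or redirected to help resources",
        observed="Related content still surfaced in recommendations",
        verdict=Verdict.BROKEN,
    ),
]
print(summarize(results))  # counts per verdict; here BROKEN -> 1
```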

For Meta, this isn’t enough to understand how Instagram really works: “Teens under these protections [for teen accounts] viewed less sensitive content, received fewer unwanted contacts, and spent less time on Instagram at night,” says the company, which announced in June that teens had blocked accounts one million times and reported another million in response to the platform’s safety warnings. The open question is what share those million accounts represent of all the suspicious accounts circulating on the app.

Béjar shared with EL PAÍS videos and screenshots showing eight- and 10-year-old girls replicating a video in which they reveal their name, age, height, zodiac sign, and other details. Many of these videos, as this newspaper has been able to verify, are still accessible on Instagram for adult users. The minimum age to have an Instagram account is 13.

“I found eight- and nine-year-old girls making videos that, to them, seem innocent,” Béjar says, “but this network distributes them to pedophiles.” “The worst of all was a girl copying another video that said, ‘Add a red heart if you think I’m cute. A yellow one if I’m fine. And a blue one if you think I’m ugly.’ It had a million views. Another little girl, about seven or eight, copied it, and it was seen 250,000 times, and there were comments from men with the emoji of a licking tongue, horrible things,” he adds. The report includes screenshots of some of these videos.

From succeeding in Silicon Valley to blowing the whistle

Béjar left Meta in 2015 after six years and later worked there as an external consultant between 2019 and 2021. In 2015, he was chosen by this newspaper as one of the 20 Latinos who “succeeded in Silicon Valley.” In 2023, he testified before the U.S. Congress about how his youngest daughter received messages on Instagram from adults trying to strike up a relationship with her.

Meta counters that some features have changed names, or that certain functions depend on who sends a message first or on whether the teenager reports or limits what they see. For Béjar, this approach is wrong. Every technology company knows how to get users to activate features in its apps: it comes down to the color of a button, its placement, the number of taps required. If restrictions aren’t placed prominently and worded appropriately, they won’t be activated.

“You know when the company wants you to use something,” says Béjar. “They also know how to make you not use something; they complicate it, make it difficult,” he adds. Part of his job at Meta was exactly this: making the language appropriate for young people. Perhaps to them, “reporting” sounds too much like snitching or ratting someone out, and a different wording is needed.

The study is divided into four main sections: Inappropriate conduct and contact; Sensitive content; Time spent and compulsive use; and Age verification, minors, and sexualized content. Instagram’s worst-performing category is sensitive content, where every feature received a negative rating: it is too easy for young people to view violent and sexual content (drawings and descriptions) and videos promoting self-harm or extreme diets.

Worse in Spanish

The study is in English, but Béjar, who is also a Spanish speaker, says the protections perform far worse in Spanish and in other languages besides English. “I didn’t anticipate that if you started typing in Spanish ‘I want to k,’ it would recommend completing it as ‘I want to kill myself’ and ‘I want to kill another person,’” says Béjar. “In my tests, nothing worked in Spanish. In other languages, I imagine it would be even worse. You could search in Spanish ‘I want to lose weight,’ and it would recommend pills, always from devices set up with a teen account. I also tried posting in Spanish ‘you are a whore, kill yourself’ as a comment, and nothing happened,” he adds.

One of the authors’ goals is for this research method to become standard practice in the industry. “Independent scenario testing should become a standard practice, carried out not just by researchers but also by regulators and civil society to answer questions about platform functionality,” the report states. “Treating safety tools with the same rigor that cybersecurity applies to other critical technologies is the only way to know whether platforms are keeping their promises.”


Source: https://english.elpais.com/technology/2025-10-02/a-study-by-former-meta-employee-claims-instagrams-teen-protection-measures-dont-work-well.html
