When Meta researchers visited a German mother in April 2023, they heard something that would shake the company to its core. While the mother confidently told them she did not allow her sons to interact with strangers on VR headsets, her teenage son revealed a disturbing truth: adults had sexually propositioned his younger brother, who was under 10, numerous times. What happened next became the centerpiece of explosive whistleblower allegations that four current and former Meta employees have now brought to Congress.
According to these whistleblowers, Meta's legal team ordered the deletion of this testimony and all written records of the incident, despite its inclusion in an internal report about grooming fears. Jason Sattizahn, the Meta researcher who witnessed the interview, described how "the mother's face in real time displayed her realization that what she thought she knew of Meta's technology was completely wrong."
The scope of the problem runs deeper than anyone imagined
The research being suppressed was not just anecdotal; it revealed systematic safety failures across Meta's VR ecosystem. A Florida Atlantic University study of 5,005 teens found that 32.6% of youth own VR headsets, with boys significantly more likely to own them than girls (41% versus 25.1%). What many encounter inside these virtual spaces is deeply troubling.
The data tells a stark story: more than 44% of young VR users received hate speech or slurs, 37.6% experienced bullying, and 35% faced harassment. Most concerning, nearly 19% experienced sexual harassment, and 18.1% encountered grooming or predatory behavior. Girls experienced sexual harassment and grooming more frequently than boys, highlighting the gendered nature of these risks.
Independent investigations have corroborated these concerns. Fairplay's nine-month investigation found children under 13 regularly accessing Horizon Worlds using adult accounts, with researchers encountering 512 users across 26 visits, 170 of whom, about 33%, were clearly under 13 based on their voices and behavior.
The scale becomes even more striking when examining specific virtual environments. In the "VR Classroom" experience, at least 52% of users were identified as children, and in one session, 20 out of 27 participants had what researchers described as "obvious child voices." This is not just a few kids slipping through the cracks; it is systematic bypassing of age restrictions on a massive scale.
How Meta allegedly shaped the narrative around child safety
The whistleblower documents, spanning thousands of pages over a decade, reveal what employees describe as a systematic effort to avoid liability. Internal documents show Meta lawyers advising researchers to avoid collecting data on children using VR devices due to "regulatory concerns," while instructing staff on how to handle sensitive topics that might attract negative publicity or regulatory scrutiny.
This approach extended beyond VR into the company's broader research apparatus. Meta has funded academic research projects that foster a more benign view of Instagram, helping support its contention that academic research on social media's impact remains inconclusive. It also created the Trust, Transparency & Control Labs, which publishes reports supporting its kid-focused products.
Management of sensitive research even touched naming conventions. The documents describe "Project Salsa," aimed at creating tween accounts with parental supervision, and "Project Horton," a $1 million study to assess the effectiveness of Meta's age-verification tools that was cancelled rather than completed. Employees, represented by the nonprofit Whistleblower Aid, allege that Meta sought to "establish plausible deniability" regarding the harmful effects of its products.
What is striking is how employees had to navigate internal secrecy measures. The company tried to keep its plan, codenamed "Project Salsa," from leaking, requiring employees to sign legal disclosures and mark documents as "A/C privilege." This level of secrecy around child safety initiatives suggests the company understood the potential controversy.
The company's response reveals the stakes involved
Meta's defense has been characteristically aggressive. Spokeswoman Dani Lever dismissed the allegations as "based on a few examples stitched together to fit a predetermined and false narrative", asserting that any data deletion complied with US and European privacy laws. The company points to protections it has implemented, including parental controls and default settings that limit teen interactions to known contacts.
But the financial context makes these safety concerns even more pressing. Meta's VR division, Reality Labs, had sold 20 million headsets by 2023 while losing more than $60 billion over five years. With such massive investments at stake, the pressure to expand the user base, including younger users, becomes a critical business imperative.
The timing of Meta's policy changes is particularly telling. Employees warned that children under 13 were bypassing age restrictions, but parental controls were only implemented after the Federal Trade Commission began investigating the company’s compliance with children’s privacy law. In 2023, Meta lowered the minimum age for Quest headsets from 13 to 10, a move that coincided with increased regulatory scrutiny.
Meta's public rationale for that change is straightforward: kids want to use VR headsets, and it is better to give them a more restricted experience than none at all. There is also a defensive element: the company wants to get ahead of potential lawsuits or fines, like the $520 million penalty recently levied by the FTC against Epic Games.
What this means for the future of VR safety
The implications extend far beyond Meta's corporate practices. Research shows that because the metaverse offers richer emotional experiences, youth may be particularly vulnerable to significant harm in these spaces. Immersion can amplify experiences and emotions, leaving traditional safety measures feeling inadequate.
Current safety infrastructure has critical gaps. While girls are more likely than boys to use in-platform safety measures like "Space Bubble" and "Personal Boundary," youth overall use these features infrequently. The tools exist; kids simply are not using them consistently.
This oversight gap mirrors the German mother's experience from our opening story. Most VR headsets lack parental controls comparable to those on smartphones, so parents who believe they understand the technology often discover they lack crucial visibility into their children's virtual interactions. They are essentially flying blind, learning that their understanding of their children's digital experiences may be "completely wrong."
The Senate Judiciary subcommittee will examine these claims in a hearing, a critical moment for VR safety regulation. As researchers note, it is "all hands on deck" to build a safer and more inclusive metaverse as the technology continues evolving.
But we are working backwards. Because academic research on VR's effects on children has only just begun, we are conducting what amounts to a massive real-world experiment with millions of young users. The research that does exist offers some reassurance (in one study, 94% of children experienced no significant changes to visual-motor function after VR sessions), but the social and emotional safety questions remain largely unanswered.
The reckoning that’s been years in the making
These whistleblower revelations represent more than corporate malfeasance; they expose the fundamental tension between rapid technological expansion and user safety, particularly for vulnerable populations. This institutional approach to minimizing evidence of harm, first documented in Meta's Instagram operations, appears to have extended into the company's VR division. Previous Meta whistleblower Arturo Bejar testified that the company fosters a culture of "see no evil, hear no evil" that overlooks evidence of harm while presenting carefully crafted metrics to downplay issues.
Bejar's testimony provides crucial context for understanding Meta's broader institutional approach. He shared evidence of his conversations with Meta executives, including Chief Product Officer Chris Cox, where he presented research on teen harm. According to Bejar, Cox expressed awareness of concerning statistics yet failed to take what the engineer felt would be appropriate action, something Bejar found "heartbreaking." This pattern of awareness without adequate response appears to extend into the VR space.
The German mother's story, her face showing in real time that her understanding of Meta's technology was "completely wrong", serves as a powerful metaphor for society's broader awakening to VR safety risks. Here was a conscientious parent who thought she had control over her children’s digital experiences, only to discover that the virtual world had been exposing her young son to sexual predators without her knowledge.
What is particularly troubling is how this mirrors broader patterns in Meta's approach to child safety across platforms. The company has faced criticism for Instagram's recommendation algorithms connecting predators to a "vast" pedophile network, and last month, the Federal Trade Commission claimed that Meta "repeatedly violated its privacy promises" by profiting off data collected from young users.
The question now is not whether Meta knew about these risks; the whistleblower documents make that abundantly clear. The question is whether regulatory action and industry accountability will emerge quickly enough to protect the millions of young users already immersed in these virtual worlds, and whether the promise of the metaverse can be realized without sacrificing the safety of its most vulnerable users.
As we stand at this crossroads, the stakes could not be higher. VR technology holds genuine promise for education, social connection, and creative expression. But if we cannot figure out how to make these spaces safe for children, if we continue prioritizing growth over protection, we risk creating a generation whose first experiences with immersive technology are defined by harassment, exploitation, and harm. The German mother's shocked realization should serve as a wake-up call: what we think we know about keeping our children safe online may be completely wrong.