Meta may have sold seven million of its Ray-Ban smart glasses in 2025 alone — but likely didn’t anticipate the outpouring of criticism when a recent investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that Meta’s subcontracted data annotators in Nairobi, Kenya, could’ve been watching users through their glasses’ cameras as they went to the bathroom or had sex.
The damning revelations shed light on the AI industry’s reliance on overseas labor for data labeling to train its models, a hidden reality glossed over in marketing materials by one of the biggest tech companies in the world.
Just days after the investigation was published, Meta was hit with a class action lawsuit accusing the company of woefully misleading its customers by claiming that it had put privacy front and center.
“No reasonable consumer would understand ‘designed for privacy, controlled by you’ and similar promises like ‘built for your privacy’ to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas,” reads the lawsuit, which was obtained by Futurism and filed in a San Francisco district court on Thursday.
“Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false,” the lawsuit charges.
“You cannot market a product as ‘built for privacy’ and then funnel footage of people’s intimate moments to contract workers without their knowledge,” said Yana Hart, partner at Clarkson Law Firm, which filed the lawsuit, in a statement. “Meta made privacy the centerpiece of its marketing campaign because it knew consumers would never buy these glasses if they knew the truth.”
The lawsuit “seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline.”
A Meta spokesperson told Engadget that data from its glasses may end up in the hands of human contractors, but declined to respond to the lawsuit’s claims.
The spokesperson also claimed that “unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.”
However, what Meta fails to explain is that it’s impossible to use the devices’ core AI features without authorizing human contractors in Kenya to view the resulting footage.
The lawsuit claims Meta did not adequately disclose that intimate footage could be reviewed and annotated by a human contractor. In other words, its smart glasses represent a major privacy liability.
“The undisclosed human review pipeline renders the Meta AI Glasses’ privacy features materially misleading, transforms the product from a personal device into a surveillance conduit, and exposes consumers to unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury,” the document reads.
“The exposure of such content to thousands of unknown individuals creates a persistent and unreasonable risk of harm that Meta’s marketed privacy features were represented to, but do not, prevent,” it continues.
“Meta made a promise to millions of consumers while knowing full well it could not keep it,” said Clarkson Law Firm managing partner Ryan Clarkson.
“While the multi-trillion dollar tech titan attempted to reassure and placate consumers about these smart glasses through ads about privacy and control, workers thousands of miles away have been watching footage from inside people’s bedrooms all along,” he added. “That is not a technicality or an oversight — that is a system working exactly as designed, and it cannot be allowed to continue.”
Beyond the lawsuit, the latest revelations have resulted in netizens coining a new term for Meta’s product: “pervert glasses.”
More on the glasses: Meta Workers Say They’re Seeing Disturbing Things Through Users’ Smart Glasses