A Cailian investigation has lifted the veil on a hidden stage of AI development: human contractors in Kenya, tasked with labelling visual interactions, say they have been shown large numbers of intimate, private clips captured by Meta’s AI-enabled smart glasses. The disclosures underscore a tension at the heart of consumer-facing multimodal AI: devices that promise seamless, hands-free intelligence can also stream sensitive moments into corporate training pipelines and contractor workflows far from the user’s home.
Multiple Kenyan data labellers told investigators that the datasets supplied to them for annotation contained recordings from users of Meta’s Ray-Ban Meta smart glasses. The material reportedly included scenes in private bedrooms and footage of non-users captured without their knowledge. One worker described a clip in which a man left his glasses on a bedside table before someone who appeared to be his partner entered the room to change clothes; others reported seeing bank card details and partially unblurred faces despite automated masking tools.
Meta has pointed to language in its service terms and AI policies that permits third-party review of interactions between users and Meta AI for quality control and training. The company said it sometimes hires contractors to audit data, that it uses filtering and automated redaction to protect privacy, and that video remains on the device unless a user shares it with others or submits it to Meta AI. A Meta spokesperson characterized contractor review as routine and said the company takes steps to prevent individuals from being identified where possible.
Contractors and investigators say the technical protections are imperfect. Meta applies automated face-blurring to data sent for labelling, but labellers reported that the tool does not always perform as intended, leaving partial faces visible. The presence of non-user subjects and exposed financial information in some clips raises the ethical and legal stakes: consent cannot be assumed when bystanders are filmed, and sensitive identifiers create risks of harm that go well beyond embarrassment.
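For readers unfamiliar with how such redaction typically works, the sketch below shows a generic detect-then-blur pass built on OpenCV’s stock frontal-face detector. This is an illustrative assumption, not Meta’s actual pipeline, but the failure mode the labellers describe falls out of the pattern naturally: any face the detector misses, such as a profile view or a partially occluded face, is never blurred at all.

```python
# Illustrative sketch of a detect-then-blur redaction pass (a generic
# pattern, not Meta's pipeline). Uses OpenCV's bundled Haar cascade,
# which only finds roughly frontal faces: anything it misses is left
# untouched, i.e. the step "fails open".
import cv2

def blur_faces(frame):
    """Return a copy of a BGR frame with detected frontal faces blurred."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = frame.copy()
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        # Heavy Gaussian blur over the detected box; kernel size must be odd.
        out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w],
                                                 (51, 51), 0)
    return out

if __name__ == "__main__":
    frame = cv2.imread("example_frame.jpg")  # hypothetical input frame
    cv2.imwrite("example_frame_redacted.jpg", blur_faces(frame))
```

Production systems use far stronger detectors than this, but the structural weakness is the same: redaction quality is bounded by detection recall, and every miss leaves a face fully visible to whoever reviews the footage.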
The issue is not theoretical. Meta’s smart glasses, developed in partnership with EssilorLuxottica, have moved rapidly toward mass-market scale: partner disclosures place 2025 unit sales at more than 7 million pairs, with expectations that 2026 could push total shipments close to or beyond 10 million. At the same time, privacy policy changes Meta made last year removed some user choices: the glasses’ camera remains enabled by default unless the “Hey Meta” voice assistant is explicitly turned off, and users no longer have the option to prevent their voice recordings from being stored in the cloud.
Regulators are already taking notice. The UK’s Information Commissioner’s Office has said it will press Meta to explain its data-handling practices and whether the company complies with local data-protection rules. The ICO framed the problem plainly: devices that process personal data must give users control and transparency about what is collected and how it is used. This inquiry could presage broader scrutiny across jurisdictions where data-protection regimes are robust.
The revelations expose structural risks in the AI training supply chain. Human review remains an essential step for many multimodal systems — yet it creates cross-border flows of sensitive material to low-cost contracting hubs, where disparate legal regimes and operational lapses can compound privacy harms. For consumers, the practical choice about whether to leave a wearable’s camera enabled now sits alongside questions of meaningful consent, notice and recourse when third parties may see private moments.
For Meta and rival makers of camera-equipped AI devices, the fallout is straightforward: trust is a fragile asset for consumer hardware that invades intimate spaces. Unless companies adopt stronger defaults, including on-device processing, genuine opt-outs, robust and verifiable anonymization, and clearer consent mechanisms for bystanders, users may slow adoption and regulators may impose more prescriptive rules that constrain data flows and model-training practices. The debate over convenience versus privacy is moving from abstract policy rooms into bedrooms, and the industry will be judged on how it responds.
