⏱️ 3.7 Mins Read
Is the ‘Glasshole’ Effect 2.0 about to happen? Could major fashion conglomerates like EssilorLuxottica — parent company of Ray-Ban and Oakley — distance themselves from Meta smart glasses altogether? And what would that mean for the prospects of the multi-billion-dollar smart glasses industry?
📌 Executive Brief
- The scandal: A Kenyan digital rights group has petitioned regulators to investigate Meta’s smart glasses privacy scandal for unlawful data harvesting. Workers at subcontractor Sama endure 10-hour shifts reviewing deeply private footage, with no way to flag disturbing content without risking their jobs.
- The privacy gap: Meta claims footage is auto-blurred, but workers say the system regularly fails. The “manual review” disclosure exists in fine print — users aren’t told that a contractor abroad may be watching their most private moments.
- The economics: Routing data annotation to lower-income countries lets Meta avoid Western labour costs and privacy laws. Factoring in true costs would likely triple or quadruple the device’s price — killing its mass-market appeal.
- The stakes: A ruling against Meta in Kenya could trigger investigations globally and shake consumer confidence in a $15B industry. Fashion partners like Ray-Ban could also walk away if public perception turns.
- The fix: Federated learning and synthetic data could eliminate the need for human-reviewed footage — but both are currently more expensive than simply outsourcing the work.
Someone in Nairobi Is Watching You
In BeyondNewsReport, we examine all the possibilities, particularly after revelations that your most private moments may no longer be private: contract workers in isolated clean-rooms in Nairobi are reviewing and labelling footage captured by your smart glasses to train AI models, without your knowledge or consent.
According to a recent report, a Kenyan digital rights organization, The Oversight Lab, has formally petitioned the country’s data protection regulator to investigate whether footage captured by Ray-Ban Meta Smart Glasses is being unlawfully harvested to train Meta’s AI systems — without the knowledge or consent of those recorded. The complaint centers on contract workers in Nairobi who review and label sensitive footage, including bathroom visits, intimate encounters, and financial details, collected from glasses users worldwide.
Annotators working for Meta’s subcontractor, Sama, in Nairobi endure gruelling 10-hour shifts. Because of the extreme sensitivity of the footage, workers operate in highly monitored, isolated clean-rooms where personal phones and recording devices are strictly banned.
The cybersecurity news platform Help Net Security, in its report, revealed that workers employed by Meta’s subcontractor, Sama, in Nairobi, routinely review first-person video containing deeply private moments: people using the toilet, changing clothes, engaging in sexual acts, or inadvertently recording bank cards and computer screens. Meta claims faces and sensitive data are automatically blurred; however, workers confirm the algorithms frequently fail, particularly in low lighting.
Workers are not warned up front about the intimacy of the footage. When confronted with disturbing or potentially illegal content, workers have no mechanism to flag it for ethical review without risking their jobs.
With Meta reporting over 7 million smart glasses sold in 2025 alone, the proprietary AI models driving this hardware are entirely dependent on the Kenyan workforce. Yet these workers see almost none of the value generated by a product retailing at $299 and up.
Meta’s subcontractor, Sama, is already notorious for the psychological damage inflicted on its workforce; over 140 former Sama content moderators in Kenya previously sued after developing PTSD from reviewing graphic Facebook content. Now, smart glasses footage introduces a new psychological burden: the guilt of voyeurism. Workers report immense discomfort at watching people who clearly do not know they are being recorded, fundamentally altering their own trust in modern technology.
The Wearables Industry — The Consent Architecture Problem
Meta markets the Ray-Ban glasses as ‘designed for privacy,’ pointing to an LED recording indicator light, yet it has been widely documented that this light can easily be obscured with tape or a marker to enable covert recording.
Moreover, smart glasses passively capture continuous footage across the most sensitive environments, including bathrooms, bedrooms, and medical offices, with no technical safeguards preventing recording in such spaces.
The Attention Economy: A Multibillion-Dollar ‘Unaccounted Subsidy’
Meta’s privacy policy and AI Terms of Service state: “In some cases, Meta will review your interactions with AIs… and this review may be automated or manual (human).”
This disclosure is buried in legalese, and users are not told at setup that ‘manual review’ means a contractor in Kenya might watch their spouse step out of the shower.
Competitors like Apple (with the Vision Pro) have stressed ‘on-device processing’ to minimize data leaving the hardware, in contrast with Meta’s data-harvesting business model. However, the entire wearable AI industry remains opaque about the exact geographical flow of human-in-the-loop (HITL) training data.
The Economic Architecture — Following The Money
By routing data processing to lower-income countries, major technology corporations bypass the stringent labour protections and privacy regulations of the developed world.
If the true human and privacy costs were factored in — including Western-equivalent wages for data annotation and fully secure on-device processing — the retail price of the device would likely be three to four times higher, destroying its mass-market viability in an industry projected to reach $15 billion within the next 10 years.
The Regulatory & Legal Dimension
Kenya’s Office of the Data Protection Commissioner (ODPC) has historically been assertive but remains severely under-resourced relative to a trillion-dollar technology corporation.
However, a ruling against Meta would not only trigger further investigations in the developed world but also erode consumer confidence, potentially denting a smart wearable glasses market currently worth nearly $3 billion.
Such a ruling could also push Meta to pull out of Kenya, risking thousands of jobs in the short term but potentially catalysing economic upgrading in the long term.
Kenya has the infrastructure to move up the value chain from exploitative, low-margin data labeling to higher-skilled AI development and ethical data auditing, provided international frameworks demand fair-trade data sourcing.
The Future — Where Does This Lead?
The immediate technical solution is federated learning, where AI models are trained locally on the device, and only encrypted model updates — not raw video — are transmitted to the cloud. Separately, synthetic data generation can simulate training scenarios without involving any human-captured footage. However, both methods are currently more expensive and computationally intensive than simply exploiting cheap human labor.
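The core idea of federated learning can be sketched in a few lines. This is an illustrative toy simulation of federated averaging (FedAvg) — not Meta’s actual pipeline — in which each simulated “device” fits a tiny model on its own private data and transmits only a weight update to the server; the raw data never leaves the client function:

```python
import random

def local_update(w, data, lr=0.1, epochs=5):
    """One client's local gradient steps on a toy model y = w * x.

    The raw (x, y) pairs never leave this function; only the
    resulting weight delta is returned for transmission.
    """
    w_local = w
    for _ in range(epochs):
        # Least-squares gradient computed purely on local data
        grad = sum((w_local * x - y) * x for x, y in data) / len(data)
        w_local -= lr * grad
    return w_local - w  # the delta is the only thing shared

random.seed(0)
TRUE_W = 3.0  # hidden relationship each client's data reflects

# Five simulated devices, each holding 40 private samples
clients = []
for _ in range(5):
    data = [(x := random.gauss(0, 1), TRUE_W * x + random.gauss(0, 0.05))
            for _ in range(40)]
    clients.append(data)

# Server loop: average the deltas, never touch raw data
global_w = 0.0
for _ in range(20):
    deltas = [local_update(global_w, d) for d in clients]
    global_w += sum(deltas) / len(deltas)

print(round(global_w, 3))  # converges toward TRUE_W = 3.0
```

In a real deployment the updates would additionally be protected by secure aggregation or differential privacy, but even this bare sketch shows why the approach removes the need for human annotators to ever see the footage: the cloud only ever receives numbers describing model changes.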
Fashion brands act as the “social license” for wearable tech. Big Tech needs high fashion to normalize its hardware far more than high fashion needs Big Tech. If the public decides these devices are creepy, fashion brands will distance themselves to protect their own prestige, leaving the tech industry holding a profoundly unpopular piece of hardware and an unresolved ethical debt.