Post · 6 min read

The Supply Chain You Never See

What the UN report on data labellers reveals about the real cost of AI

Every time you use a large language model, an image generator, or a content recommendation algorithm, you are using a product built on human labour that its makers would prefer you never thought about.

That is not a rhetorical flourish. It is a supply chain fact. And a UN News report published on 4 March 2026, drawing on an ILO and ITU webinar, has laid it out with uncomfortable clarity: the people who make AI systems functional describe "extreme pressure, constant monitoring, low wages and mental health harms" as the baseline conditions of their work.

The invisible workforce

Ben Richards of UNI Global Union describes two main groups in this workforce. Content moderators review harmful material so that platforms remain usable. Data labellers and annotators sort, tag, and structure information so that machine learning models can recognise patterns. Without either group, the products do not work.

The World Bank estimates that between 154 million and 435 million people worldwide are engaged in online gig work. A significant proportion of those workers perform data labelling, tagging, transcribing, and content review tasks. They are the human infrastructure beneath the machine.

This is not new. Amazon Mechanical Turk industrialised the practice back in 2005. What has changed is the scale. Training ChatGPT, Gemini, and their competitors requires not only vast datasets but continuous human evaluation of model outputs, a process called reinforcement learning from human feedback (RLHF). As long as models are being refined, this labour continues.

What the work actually involves

The 2023 TIME investigation into OpenAI's arrangement with outsourcing firm Sama remains one of the most detailed accounts of what data labelling can look like at the sharp end. Workers in Nairobi were paid less than $2 per hour to label textual descriptions of sexual abuse, hate speech, child sexual abuse material, torture, and violence. They read between 150 and 250 such passages per nine-hour shift. All four workers interviewed described lasting psychological damage.

Sama cancelled its OpenAI contracts eight months early, citing the traumatic nature of the work. That decision tells you something about how bad the conditions were: a company whose entire business model depends on these contracts walked away from one.

The human rights organisation Equidem documented more than 60 cases of serious mental health harm among content moderators and data labellers across Kenya, Ghana, Colombia, and the Philippines. The catalogue of symptoms is grim: PTSD, depression, anxiety, suicidal ideation, insomnia, chronic migraines, panic attacks, hallucinations, dissociation, and intrusive flashbacks. At least 76 workers reported physical symptoms they believed to be stress-related.

These are not edge cases. They are the predictable consequences of asking people to review hundreds of videos per day containing sexual violence, child abuse material, and recorded deaths.

Designed to diffuse accountability

The structure of the supply chain is not accidental. It is specifically engineered to place distance between the companies whose names you know and the conditions their products depend on.

The chain works like this: a technology firm contracts an outsourcing intermediary, which contracts workers, often as independent contractors rather than employees. Sama received $12.50 per hour from OpenAI while paying Kenyan workers approximately $2 per hour. Scale AI, which relies heavily on Global South labour, has been valued at $14 billion. The gap between what the tech firm pays and what the worker receives is where the intermediary extracts its value, and where accountability dissolves.

This layered structure means that when journalists or researchers ask difficult questions, the tech firm can point to the intermediary. The intermediary can point to local labour law. And the worker, bound by a non-disclosure agreement, often cannot point to anything at all.

NDAs as the keystone

I want to stay on the NDAs for a moment, because I think they are the single most important mechanism in this entire arrangement.

On their face, NDAs protect proprietary data. In practice, they prevent workers from discussing their conditions with family members, therapists, or union organisers. Equidem's report found that the majority of workers approached for interview declined to participate: 75 out of 105 in Colombia, 68 out of 110 in Kenya. The overwhelming reason was fear of violating their NDAs.

Think about what that means. Workers experiencing PTSD and suicidal ideation from their jobs are contractually prevented from seeking support because doing so might involve describing what they do all day. The NDA is not a side effect of the business model. It is structural. Without silence, the supply chain would face scrutiny it is not designed to survive.

The geography is not random

The concentration of this labour in Kenya, India, the Philippines, Colombia, and Venezuela follows a pattern that should be familiar to anyone who has studied economic history. Dr Ruhi Khan of LSE argues the arrangement "mirrors colonial hierarchies, where data, labour and environmental resources from the Global South sustain AI innovation in the Global North."

Kenya alone has approximately 1.2 million online workers in this sector. For workers in rural areas, particularly women, data annotation roles represent rare income opportunities despite the conditions, because alternatives are scarce. That lack of alternatives is precisely what makes the labour cheap.

The wage disparities are stark. Workers in Venezuela earn between $0.90 and $2.00 per hour. Kenyan workers earn roughly $1.50 to $2.00 per hour. In the United States, equivalent roles pay $10 to $25 per hour. Some Kenyan workers contracted by Chinese tech companies report spending approximately 12 hours labelling thousands of short videos for around 700 Kenyan shillings, about $5.42, per session.

These are not different wages for different work. They are different wages for the same work, priced according to the desperation of the labour market.

What happens when workers organise

The response from tech firms to organised labour has been instructive. Meta laid off nearly 300 Kenya-based content moderators after workers formed the African Content Moderators Union. Scale AI banned workers in Kenya, Nigeria, and Pakistan from its Remotasks platform in March 2024, cutting workers off without notice and reportedly leaving wages unpaid.

In May 2024, nearly 100 Kenyan tech workers wrote an open letter to US President Biden describing their conditions as "modern day slavery." That phrase is theirs, not mine. I include it because the people doing the work have the clearest view of what the work is.

There are signs of progress, though modest ones. UNI Global Union is building a Global Trade Union Alliance of Content Moderators, calling on TikTok, Meta, Alphabet, and OpenAI to implement mental health protections, living wages, long-term employment contracts, and the right to collective bargaining. The outsourcing firm Covalen agreed to review employment contracts following collective action. The EU's Corporate Sustainability Due Diligence Directive is beginning to impose supply chain obligations. These are small steps, but they represent the first time this workforce has had any organised voice at all.

The automation trap

Here is the part that keeps me thinking. Automated annotation tools have already reduced labelling time by 50 percent. The AI market is projected to reach $407 billion by 2027. The workers who built the foundations of that market are now the most likely to be displaced by the tools they trained.

And here is the thing: their displacement will not improve their conditions. It will simply end their involvement. No transition support. No share of the value they created. No acknowledgement that the products could not have been built without them.

What this means for the rest of us

I use AI tools. I write about them on this site. I think what these systems can do represents genuine progress. But I also think it matters that we understand what they cost, and who pays.

The AI industry's public commitments to ethics, responsibility, and human benefit sit uneasily alongside a supply chain that depends on poverty wages, psychological harm, and enforced silence. These are not contradictions that better PR can resolve. They are structural features of how the products are made.

The question the UN report raises is straightforward: is the way we build AI compatible with the values we claim AI will promote? Right now, the honest answer is no. And the workers who could tell us the most about it are the ones contractually forbidden from speaking.
