Katie Miller Criticizes OpenAI for Outsourcing Content Moderation to Low-Paid Workers in Kenya
Katie Miller pointed out that OpenAI has outsourced ChatGPT's content moderation and toxicity-labeling work to a firm in Kenya.
The Kenyan workers earn less than $2 per hour after tax, even as OpenAI claims it is developing AGI for the benefit of all humanity.
The market dynamics are plain: AI giants continue to source labeling services through low-cost outsourcing chains, with funding flowing from Silicon Valley into the African data-labor market. The chief beneficiaries are the intermediary outsourcing firms, while low-wage Kenyan workers absorb the pressure and the exposure to traumatic content.
Source: Public Information
ABAB AI Insight
OpenAI previously outsourced toxicity data labeling in Kenya through Sama (formerly Samasource). The project ran from 2021 to 2022, and workers had to process extreme content, including depictions of violence and sexual abuse, on a daily basis. Sama billed OpenAI $12.50 per hour, while workers took home only $1.32 to $2 per hour, meaning roughly 84 to 89 percent of the billed rate stayed with the intermediary.
The capital logic is to externalize the cost of model safety alignment by mobilizing low-cost labor worldwide through contractors. The motive was to iterate ChatGPT rapidly enough to capture the market window while compressing non-core labor expenses. Meta and Google have historically used the same Kenyan and broader African labeling chains to scale their training data.
Similar cases include Facebook's outsourcing of content moderation to the Philippines and India in the 2010s, which led to class-action lawsuits from workers suffering PTSD. OpenAI, for its part, is transitioning from an early-stage research lab to a commercial platform, relying on an invisible global labor network to prop up its public narrative of "safe AGI."
At its core, this is a restructuring of the industry chain: the "dirty work" of AI training is shifting from highly paid American engineers to low-paid workers in developing countries. The mechanism is capital chasing the lowest marginal cost for "human filters," transferring the negative externalities of safety alignment onto vulnerable workers so that a small group of core developers and investors can capture the commercial returns of the model.