The Dark Side of the AI Revolution Nobody Talks About
You’ve heard the story of ChatGPT. The overnight sensation, the billion-dollar valuation, the technology that changed everything. What you probably haven’t heard is what it cost — and who paid.
Karen Hao spent years reporting from inside the AI industry for the Wall Street Journal and MIT Technology Review. Her book Empire of AI isn’t a tech thriller about rogue robots. It’s something more unsettling: a meticulous account of how Silicon Valley’s AI boom is built on the same extractive logic as every empire that came before it — colonial labor, plundered resources, and a narrative of progress that conveniently ignores who gets left behind.
Here are the most counter-intuitive, disturbing, and important things the book reveals.
The Workers Who Made ChatGPT “Safe” Were Paid Less Than $4 an Hour to Read Child Abuse Descriptions
Before ChatGPT launched, OpenAI needed to build a content-moderation filter to prevent the model from spewing horrific outputs. To train that filter, someone had to read hundreds of thousands of grotesque text descriptions — rape, child abuse, bestiality — and sort them by severity.
That someone was Mophat Okinyi, a 28-year-old man in Nairobi, Kenya, hired through an outsourcing firm called Sama for no more than $3.74 an hour.
Okinyi had no idea he was working for OpenAI. He had no idea what the project was ultimately for. What he knew was that the posts were getting longer, more vivid, and that his colleagues were beginning to crack. He withdrew from friends, stopped being intimate with his wife. When she finally left him, taking her daughter with her, Okinyi sought psychological counseling — which cost more than a full day’s pay.
When Hao reported the story for the Wall Street Journal, OpenAI said it wasn't at fault: the company hadn't known Sama's workers were struggling. But Hao shows this isn't naïveté. It's structure. Outsourcing exists precisely to keep the dirtiest work out of sight and out of liability.
“Workers are squeezed twice — once each to pad the profit margins of the middleman and the AI company.” — Mercy Mutemi, lawyer for the Kenyan workers
The AI Industry Deliberately Seeks Out Populations in Crisis
The data annotation industry — the army of hidden workers who label and rank AI outputs — didn’t stumble into exploiting desperate people by accident. It developed a playbook.
When Venezuela collapsed into hyperinflation in 2016, workers flooded onto data annotation platforms, willing to work for almost nothing. One company, Scale AI, watched this happen and systematized it. They called it “the formula”: find populations with high education, good internet, and acute economic desperation. Offer high wages to attract workers. Then quietly throttle those wages once you’ve captured the market.
Scale ran the playbook in Venezuela, then Kenya, then Nigeria, Pakistan, and North Africa. When workers tried to organize, accounts were terminated. When a group in North Africa protested a pay cut of more than a third, Scale threatened to ban anyone engaging in “revolutions and protests.” Nearly all of them were eventually removed from the platform anyway.
The book’s most chilling observation: this isn’t incidental to the AI boom. It’s a defining feature. One of the most consistent ways empires accumulate wealth is by paying as little as possible to the people whose labor makes that wealth possible.
The “Safety” Movement That Shapes AI Policy Was Partly Funded by Fraudulent Crypto Money
Effective altruism — the philosophical movement that gave us the language of “AI timelines,” “p(doom),” and existential risk — became the dominant intellectual framework inside OpenAI, Anthropic, and AI safety circles. It also became the primary lens through which Washington started thinking about AI regulation.
A significant chunk of the money fueling this movement came from Sam Bankman-Fried, the crypto billionaire now serving a 25-year prison sentence for fraud. SBF credited effective altruism with shaping his entire life philosophy. He poured tens of millions into EA causes, raised the movement's profile immeasurably, and was then convicted of defrauding FTX customers of billions, some of which had flowed directly into Anthropic.
Hao isn’t arguing the AI safety concerns are wrong. She’s pointing out something more subtle: the movement that positioned itself as the sober, rational alternative to Silicon Valley hype was itself deeply embedded in Silicon Valley’s money, social networks, and blind spots. EA’s intense insularity — people working with, living with, and dating only other EA people — created exactly the kind of epistemic bubble it claimed to resist.
ChatGPT’s Launch Was an Accident — Nobody Expected It to Work
The team that built ChatGPT didn’t think it would be a big deal. They called it a “low-key research preview.” They provisioned enough server capacity for 100,000 users over the weekend. Some people guessed a few thousand users would try it. Others guessed tens of thousands.
Within five days, it had a million users. Within two months, a hundred million — the fastest consumer app growth in history at the time.
Even internally, people were baffled. GPT-3.5, the model powering ChatGPT, wasn’t dramatically more capable than GPT-3, which had already been available for two years. One former employee put it simply: “It was shocking that people liked it. To all of us, they’d downgraded the thing we’d been using internally and launched it.”
What this reveals is something the industry rarely admits: the extraordinary commercial success of AI chatbots was not the result of a genius product strategy. It was a fluke — a chat interface applied to a model already sitting on a shelf, released almost as an afterthought before the real product (GPT-4) was ready. The companies now building empires on that success are working backwards to explain why it was inevitable.
Training AI Models Is Consuming Water During Historic Droughts — and the Companies Are Hiding It
In Iowa, where Microsoft trained GPT-4, the drought was already in its second year when training began. In a single month of that training, Microsoft's data centers consumed roughly 11.5 million gallons of water, about 6% of the local water district's entire supply. GPT-4 trained for three months.
In Arizona, where Microsoft built its next phase of supercomputers, the region was living through its worst drought in over a thousand years. The Colorado River, on which Arizona and six other states depend, is at dangerously low levels.
In Uruguay, Google's proposed data center would have drawn two million gallons of drinking water per day, equivalent to the daily consumption of 55,000 people. The country was in the middle of a water crisis so severe that Montevideo's government began mixing salty estuary water into the city's drinking supply. Families opened their taps to brownish, chemical-smelling fluid. Those who couldn't afford bottled water drank it anyway.
Google didn’t disclose the water figures voluntarily. A researcher sued the environmental ministry under Uruguay’s constitutional right to water — and won.
The companies involved have responded with earnest sustainability commitments and net-zero pledges. What they have not done is slow down.
Sam Altman's Firing and Reinstatement Were Less About AI Safety Than About Trust
The November 2023 board crisis — when OpenAI’s nonprofit board fired Altman, only for him to return five days later backed by nearly the entire company — was widely reported as a battle between idealistic AI safety advocates and commercial pragmatists.
The reality, per Hao’s reporting, is messier. At the center of it was a years-long pattern of behavior by Altman that multiple board members and senior employees describe in similar terms: telling different people what they wanted to hear, making commitments he didn’t keep, cultivating an image of cautious safety-consciousness while aggressively pushing commercialization internally.
One board member described him as someone who “would say one thing to one group and another thing to another group.” Another had documented instances where he appeared to lie — not about AI risk in the abstract, but about specific things in specific conversations with specific people.
The board’s fatal mistake wasn’t caring about safety. It was believing they had the authority to act on what they believed, in a structure where investors, employees, and a $13 billion Microsoft partnership had every reason to ensure Altman’s return. Within days, the board’s attempt at governance collapsed under commercial gravity.
Indigenous Communities in Chile Are Resisting the Data Center Boom in Ways the Industry Doesn’t Want You to See
Chile is the world’s largest copper producer. It produces a third of global lithium. Its land, water, and energy are being consumed at accelerating rates to build the megacampuses that train and run AI models. The communities bearing the cost are not the ones receiving the benefits.
Sonia Ramos, an Indigenous activist in Chile’s Atacama Desert, has been fighting against mining for decades. She watched her region’s flamingos disappear. She watched the mining drain the water from ancient salt flats. Now she watches the same logic arrive in new packaging: data centers, cables, renewable energy projects — all positioned as progress, all requiring the same extraction.
But this part of the story isn't only tragic. In Santiago, a coalition of activists, architects, and law students has begun filing lawsuits, staging referendums, and forcing negotiations. In Cerrillos, a community group got Google's data center plan rejected by popular vote. In Uruguay, a researcher's constitutional lawsuit forced a corporation to disclose water usage it had marked "confidential."
These movements haven’t stopped the empires. But they are demanding something the industry rarely offers: accountability to the people who live where the infrastructure is built.
What We’re Left With
Empire of AI doesn’t end with a solution. It ends with a question that’s hard to shake: what would it mean to build AI differently?
Not just more efficiently, or more safely in the existential-risk sense — but more justly. In ways that don’t require invisible workers in economic crisis. That don’t consume the drinking water of communities already in drought. That don’t concentrate billions in wealth among a handful of companies while telling everyone this is how the future gets made.
The book's final image is of communities in Chile, armed with lawyers, architecture students, and constitutional arguments, trying to imagine what a data center could look like if it actually answered to the people surrounding it.
It’s a small thing against very large forces. But it’s also, Hao suggests, what accountability looks like when you can’t wait for the empire to reform itself.
The real question isn’t whether AI will change the world. It already has. The question is: whose world does it change, and in whose favor?