The Empire of AI: Full Research Report
The Empire of AI
Based on Karen Hao’s investigation, this is the story of how a new form of empire is being built—one of resource extraction, labor exploitation, and unaccountable power.
The Genesis of an Empire
From Altruism to Capped-Profit
OpenAI began in 2015 as a non-profit with a mission to build AGI for all humanity. But the immense cost of its ambition led to a pivotal shift, creating a “capped-profit” entity that fundamentally altered its DNA from open collaboration to fierce competition.
Comparing the core ideologies of OpenAI’s key figures reveals the central conflict between growth and safety.
The Coup of November 2023
The simmering conflict between CEO Sam Altman’s aggressive push for growth and Chief Scientist Ilya Sutskever’s safety concerns boiled over, leading to Altman’s shocking ouster and rapid reinstatement—a move that cemented his power and the company’s commercial trajectory.
- Altman Fired: The board, led by Sutskever, fires Altman over a lack of candor.
- Employee Revolt: Nearly all 770 employees threaten to quit unless Altman is reinstated.
- Altman Returns: The board capitulates, reinstating Altman and solidifying his control.
The Imperial Doctrine: Scale at All Costs
The AI industry’s dominant strategy isn’t a scientific inevitability, but a choice: build bigger models with more data and more compute. This “scaling hypothesis” creates an insatiable demand for resources.
The Engine of Empire
The 2017 invention of the “Transformer” architecture unlocked the ability to scale models to unprecedented sizes, fueling the arms race.
Thousands of GPUs were required to train GPT-3, a number that has continued to skyrocket with each successive model.
The compute required for large AI models has been doubling far faster than Moore’s Law, creating staggering costs.
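To make “far faster than Moore’s Law” concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes OpenAI’s widely cited 2018 “AI and Compute” estimate of a roughly 3.4-month doubling time for frontier training compute (a figure not stated in this report); exact numbers vary by study.

```python
# Back-of-the-envelope comparison of AI training-compute growth vs. Moore's Law.
# ASSUMPTION: ~3.4-month doubling time (OpenAI's 2018 "AI and Compute" estimate,
# not a figure from this report) vs. the classic ~24-month Moore's Law cadence.

AI_DOUBLING_MONTHS = 3.4
MOORE_DOUBLING_MONTHS = 24.0

def growth_factor(months: float, doubling_months: float) -> float:
    """Compound growth over `months` given a fixed doubling time."""
    return 2 ** (months / doubling_months)

horizon = 60  # five years, in months
ai = growth_factor(horizon, AI_DOUBLING_MONTHS)
moore = growth_factor(horizon, MOORE_DOUBLING_MONTHS)

print(f"AI compute over 5 years:  ~{ai:,.0f}x")    # ~205,000x
print(f"Moore's Law over 5 years: ~{moore:,.0f}x")  # ~6x
```

Under these assumptions, five years of scaling-era growth outpaces five years of Moore’s Law by a factor of tens of thousands, which is the arithmetic behind the “staggering costs” above.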
The Hidden Global Footprint
The seamless experience of AI is subsidized by a hidden supply chain of human labor and environmental extraction, with costs disproportionately borne by the Global South.
Labor: The “Ghost Work”
To make products like ChatGPT safe, a vast, invisible workforce performs psychologically damaging “data cleaning” for sweatshop wages.
A stark comparison of estimated annual compensation reveals the extreme disparity in the AI value chain.
Environment: The Thirst for Resources
The “cloud” is profoundly physical, consuming immense energy and freshwater, often in resource-scarce regions.
AI’s electricity demand is projected to grow to 2–6 times California’s current annual usage within 5 years.
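In absolute terms, a rough sketch of what that multiple implies, assuming a ballpark of about 250 TWh for California’s annual electricity consumption (a figure not given in this report):

```python
# Rough conversion of the "2-6x California" projection into terawatt-hours.
# ASSUMPTION: ~250 TWh/yr for California's annual electricity consumption,
# a ballpark not taken from the report; actual values vary by year and source.
CALIFORNIA_TWH = 250

low_multiple, high_multiple = 2, 6  # multiples cited in the report
print(f"Low end:  ~{low_multiple * CALIFORNIA_TWH} TWh/yr")   # ~500 TWh
print(f"High end: ~{high_multiple * CALIFORNIA_TWH} TWh/yr")  # ~1,500 TWh
```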
A proposed Google data center in Chile would use 1,000 times more freshwater than the entire local community.
The AI Colonialism Flowchart
1. Resource Extraction (Global)
Mass scraping of public & copyrighted data. Exploitation of low-wage labor. Consumption of local water & energy.
2. Value Concentration (Silicon Valley)
Proprietary models are trained, creating immense IP and market value. Profits are concentrated in a few corporations.
3. Product Deployment (Global North)
Clean, seamless AI products are delivered to consumers and businesses, masking the hidden costs of their creation.
This cycle mirrors historical colonial dynamics: raw materials and labor are extracted from the periphery to create immense wealth and finished goods for the imperial core.
Power, Governance, and the Future
The Threat to Democracy
The concentration of power in a few unelected tech leaders erodes public accountability. The ultimate risk isn’t a robot apocalypse, but citizen apathy in the face of unaccountable power.
“A major gift to the AI industry.”
Karen Hao on a proposed 10-year moratorium on state-level AI regulation, which would “enshrine the impunity of Silicon Valley into law.”
Recommendations for a New Path
To build a more equitable AI future, a new approach is needed, shifting from extraction to accountability.
- Mandate Radical Transparency: Require public disclosure of data sources, energy use, and labor practices.
- Foster a Diverse Ecosystem: Fund research into efficient, smaller-scale, and open-source AI.
- Reclaim Community Ownership: Recognize that data, labor, and natural resources are public goods, not corporate property.
Spaced Repetition Cards
The Empire of AI
A spaced repetition infographic on the key themes from Karen Hao’s investigation.
OpenAI’s Founding Myth
OpenAI was launched in 2015 as a non-profit with a mission to build AGI for all humanity, positioning itself as a benevolent safeguard against corporate monopolization of AI.
Connection: This altruistic origin story is crucial for understanding the profound ideological shift that occurred with the “Capped-Profit” pivot.
The “Capped-Profit” Pivot
Faced with immense computational costs, OpenAI restructured into a “capped-profit” entity. This hybrid model allowed it to attract massive venture capital, fundamentally altering its DNA from open collaboration to fierce competition.
Connection: This structural change was a direct consequence of adopting the “Scaling Hypothesis” as its core technical doctrine.
The Cult of AGI
Hao describes a quasi-religious internal culture at OpenAI, where employees hold a fervent, all-consuming belief in the world-altering potential of AGI, oscillating between utopian dreams and doomsday fears.
Connection: This intense belief system helps explain extreme actions like the “Effigy Ritual” and the high stakes of the Altman vs. Sutskever conflict.
The Effigy Ritual
At company retreats, Chief Scientist Ilya Sutskever would burn a wooden effigy of a deceitful, unaligned AGI. This symbolized the team’s solemn duty to destroy any unsafe creation, highlighting the cult-like atmosphere.
Connection: This ritual is a visceral manifestation of the “Safety-First” ideology championed by Sutskever, which stands in direct opposition to Altman’s growth focus.
Altman: The Techno-Capitalist
CEO Sam Altman is portrayed as the avatar of aggressive growth and commercialization. His core belief is in achieving “AGI safety through capability”—that is, building the most powerful systems first to control the future.
Connection: His ideology directly led to the November 2023 Coup, where the board’s safety concerns clashed with his relentless drive for progress.
Sutskever: The Safety Priest
Co-founder Ilya Sutskever is the high priest of the “doomer” camp, motivated by a profound fear that unchecked AI development will lead to catastrophe. His focus is on caution and alignment above all else.
Connection: This stance explains his role in attempting to oust Altman, viewing the CEO’s speed as a direct threat to the company’s core safety mission.
The November 2023 Coup
The ideological clash culminated in the board firing Altman over safety and candor concerns. However, a near-total employee revolt forced his reinstatement, cementing his power and the company’s commercial trajectory.
Connection: This event serves as the ultimate proof of the “Threat to Democracy” theme, showing how immense power is consolidated by unelected leaders.
The Scaling Hypothesis
The AI industry’s dominant theory: the best path to more capable AI is to relentlessly scale up models with more data and more compute. Hao argues this was a strategic choice, not a scientific inevitability.
Connection: This hypothesis created the “Insatiable Need for Compute,” which in turn justified the pivot to a for-profit model to raise capital.
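As illustrative context, a minimal formal statement of what “scaling” means, borrowed from the empirical scaling-law literature (Kaplan et al., 2020) rather than from Hao’s book, models test loss L as a power law in training compute C:

```latex
% Empirical compute scaling law (Kaplan et al., 2020); illustrative context
% only, not a formula from Hao's book.
% L = test loss, C = training compute, C_c and \alpha_C = fitted constants.
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C},
\qquad \alpha_C \approx 0.05
```

Because the exponent is so small, each fixed improvement in loss demands a multiplicative explosion in compute: halving the loss at this exponent would take roughly 2^20, about a million times, more compute. That arithmetic is the “insatiable need” described above.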
Transformers: The Engine
The 2017 invention of the “Transformer” architecture was the technical key that unlocked efficient scaling. It allowed models to process vast sequences of data, fueling the arms race to build bigger and bigger systems.
Connection: The success of Transformers validated the Scaling Hypothesis and led directly to the “Collapse of Reproducibility” as models grew too large to audit.
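For readers who want the mechanism behind this card, below is a minimal NumPy sketch of scaled dot-product attention, the Transformer’s core operation (Vaswani et al., 2017). The function and variable names are our own illustration, not code from the report or the book.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal sketch of the Transformer's core operation (Vaswani et al., 2017).
    Q, K, V: arrays of shape (seq_len, d). Every position attends to every
    other position in parallel, which is what makes the architecture so
    amenable to scaling on GPUs."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise similarities, shape (seq, seq)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # each output is a weighted mix of all values

# Toy usage: 4 tokens with 8-dimensional embeddings, attending to themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```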
Collapse of Reproducibility
The sheer scale of training data makes it impossible for other scientists to audit or reproduce experiments, undermining a core tenet of scientific rigor. This opacity becomes a competitive advantage, protecting intellectual property.
Connection: This breakdown of scientific norms reinforces the need for “Radical Transparency” as a key governance recommendation.
The “AI Colonialism” Thesis
Hao’s central argument: the AI industry operates like a modern empire, extracting resources (data, labor, energy) from a global periphery to concentrate wealth and power in a Silicon Valley core.
Connection: This thesis is the framework that connects “Ghost Work” in Kenya with resource extraction in Chile into a single, coherent system.
“Ghost Work” in Kenya
OpenAI contracted workers in Kenya for as little as $2/hour to perform psychologically damaging data labeling. This “ghost work” is essential for making products like ChatGPT safe for public use.
Connection: This is a prime example of the first stage in the “AI Supply Chain Flow”: resource and labor extraction from the Global South.
Environmental Cost: Energy
The AI industry’s electricity demand is projected to skyrocket, potentially equaling 2-6x the annual consumption of California within 5 years. This demand has already led utilities to extend the lives of coal-fired power plants.
Connection: This massive energy use is a key part of the “Hidden Ledger” of AI, a cost externalized from the final product’s price.
Environmental Cost: Water
Data centers consume vast amounts of freshwater for cooling. A proposed Google data center in a water-scarce region of Chile planned to use 1,000 times more water than the entire local community.
Connection: This case study is a powerful example of the “AI Colonialism” thesis, where corporate needs threaten vital community resources.
The AI Supply Chain Flow
A three-stage process: 1) Global resource extraction (data, labor, energy). 2) Value concentration in Silicon Valley. 3) Deployment of clean, finished products to the Global North, masking the hidden costs.
Connection: This flow directly mirrors historical colonial dynamics, forming the backbone of the “AI Colonialism” argument.
Threat to Democracy
The ultimate risk is not a robot apocalypse, but the erosion of citizen agency. When a few unelected leaders hold immense power, public apathy grows, and democracy withers.
Connection: This threat is exacerbated by “Regulatory Capture,” where industry lobbies to prevent oversight and enshrine its own power.
Regulatory Capture
The AI industry actively lobbies for laws that prevent meaningful oversight. Hao highlights a proposed 10-year moratorium on state-level AI regulation as a “major gift” that would enshrine Silicon Valley’s impunity.
Connection: This effort to block local laws makes the push for “Sovereign AI” by other nations an understandable, if risky, response.
The “Sovereign AI” Trap
Nations are encouraged to build their own AI capabilities to escape US dominance. However, this can lead to a new dependency on a single hardware vendor (like NVIDIA) and the same extraction of local data and resources.
Connection: This highlights the need for a truly “Diverse Ecosystem” rather than simply replacing one form of centralized control with another.
Critique: “Conclusion-Driven”
Some critics argue Hao started with the “empire” metaphor and selectively chose evidence to fit it, a form of confirmation bias that ignores the genuine technical dilemmas faced by developers.
Connection: This critique represents a clash of frameworks: viewing AI through a lens of political economy versus one of pure technical evolution.
Critique: “Technical Naivete”
Another critique is that the book dismisses scaling as an inferior method, ignoring the decades of research that made it the dominant paradigm. It’s seen as a misunderstanding of how scientific progress in the field occurred.
Connection: Hao’s counter-argument is that scaling became dominant not because it was the only path, but because it was the path of least resistance for capital-rich firms.
The Geopolitical Boomerang
The extractive practices of the “AI Empire” in the Global South create resentment and an opening for geopolitical rivals, like China, to offer alternative, state-backed AI infrastructure partnerships.
Connection: This demonstrates how the internal “Imperial Doctrine” can have unintended, negative consequences for foreign policy and global influence.
Recommendation: Radical Transparency
A core solution proposed is to mandate public disclosures for the entire AI supply chain, including data sources, energy and water use, and labor practices. This requires independent, third-party auditing.
Connection: This is a direct response to the “Collapse of Reproducibility” and the secrecy that shields companies from accountability.
Recommendation: Diverse Ecosystem
To counter monopolistic trends, public funding should be used to support research into AI efficiency, smaller models, and the creation of high-quality, ethically sourced open-source datasets.
Connection: This addresses the problem of the “Road Not Taken,” aiming to make alternative, less extractive AI development paths viable again.
Recommendation: Community Ownership
The foundational inputs of AI—our collective data, labor, and natural resources—should be recognized as public goods, not corporate property. Communities must reclaim a sense of ownership and control.
Connection: This is the ultimate counter-argument to the “AI Colonialism” thesis, seeking to reverse the flow of extraction and empower the periphery.