Inside the Mind of AI's Most Powerful CEO
Based on Karen Hao's explosive new book "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI," which reveals the dark truth behind the AI revolution through over 150 interviews with OpenAI insiders.

In a revealing new book that has sent shockwaves through Silicon Valley, investigative journalist Karen Hao pulls back the curtain on Sam Altman, the enigmatic CEO who transformed OpenAI from a safety-focused nonprofit into what critics are calling a modern-day colonial empire. Through over 150 interviews with current and former OpenAI employees, Hao's "Empire of AI" paints a disturbing portrait of a leader whose charm masks a pattern of manipulation and broken promises.
"You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king," Paul Graham, Altman's longtime mentor and Y Combinator founder, told Hao. It's a quote that captures the essence of what multiple sources describe as Altman's most dangerous asset: an almost supernatural ability to bend people to his will.
But this isn't the story of a typical tech visionary. After interviewing the people who worked most closely with Altman, Hao discovered something unsettling. "No matter how long someone worked with him or how closely they worked with him, they would always say to me at the end of the day: 'I don't know what Sam believes,'" she revealed in a recent interview.
The reason became clear as Hao dug deeper. "He always said he believed what that person believed," she explained, describing how Altman would tailor his message in one-on-one meetings. "But because I interviewed so many people who have very divergent beliefs... they're literally diametrically opposite."
The Religion Builder's Playbook
Perhaps most revealing is a blog post Altman wrote in 2013, years before founding OpenAI, in which he cited the idea that "successful people build companies, more successful people build countries, the most successful people build religions." Altman reflected that "the most successful founders in the world don't actually set off to build a company, they set off to build a religion."
This wasn't idle speculation—it appears to have been his actual strategy. When recruiting OpenAI's first chief scientist, Ilya Sutskever, from Google, Altman didn't compete on salary. Instead, as Hao discovered, "he appealed to Sutskever's sense of purpose—like, do you want a big salary and just to work for a for-profit company, or do you want to take a pay cut and do something big with your life?"
The Great Betrayal
OpenAI's origin story, as told by Altman, positioned the company as the anti-Silicon Valley alternative to Google's AI dominance. Founded in 2015 as a nonprofit with Elon Musk among its co-founders, the organization promised to be "completely open, transparent, and also collaborative to the point of self-sacrificing if necessary," according to Hao's research. They even pledged that "if another lab starts making faster progress than us on AI... we will actually just join up with them."
But this noble mission lasted barely a year. "The moment that they realized we got to go for scale, then everything shifted," Hao explained. The bottleneck had moved from acquiring talent to acquiring the massive capital needed for compute power. "That is also why Elon Musk and Sam Altman ended up having a falling out, because when they started discussing a for-profit conversion, both Elon Musk and Sam Altman each wanted to be the CEO."
The power struggle revealed Altman's true nature. Initially, both Sutskever and co-founder Greg Brockman chose Musk as the better leader. But Altman, in what Hao calls "a very classic pattern in his career," "became very persuasive to Brockman" about why choosing Musk would be dangerous. Once that manipulation campaign succeeded, Hao recounts, "Musk leaves in a huff and says, 'I don't want to be part of this anymore.'"
The Psychology of Control
What emerges from Hao's investigation is a portrait of a leader whose greatest talent—and perhaps greatest danger—lies in his understanding of human psychology. "He really does understand human psychology very well," Hao noted, "which not only is helpful in getting people to join in on his quest... he's good at persuading whoever has access to whatever resource he needs to then give him that resource, whether it's capital, land, energy, water, laws."
This psychological manipulation works on multiple levels. As Hao discovered, "people end up coming out of these personalized meetings feeling totally transformed in the positive direction, being like 'I feel superhuman, I can now do all these things.'" But the dark side is equally powerful: "Other people end up coming out of these meetings feeling like 'was I played?'... 'was he just telling me all these things to try and get me to do something that's actually fundamentally against my values?'"
The Ideology Behind the Empire
What drives Altman isn't just money—it's something more troubling. "You can't actually fully understand it as just a story about money," Hao explained. "It has to also be understood as a story of ideology, because... there are people who genuinely, fervently believe... that we can fundamentally recreate human intelligence, and that if we can do that, there is no other more important thing in the world."
This quasi-religious fervor explains why OpenAI continues its resource-intensive approach even when more efficient alternatives exist. When asked about DeepSeek's demonstration that similar AI capabilities could be achieved with "orders of magnitude less computational resources," Hao noted that American companies show "complete unwillingness" to adopt these techniques. The reason? "When you continue to pursue a scaling approach and you're the only one with all of the AI experts in the world, you persuade people into believing this is the only path, and therefore you continue to monopolize this technology."
The Global Impact
Under Altman's leadership, OpenAI's hunger for resources has created what Hao describes as a colonial extraction system. She interviewed Kenyan workers contracted by OpenAI who were paid "a few dollars an hour" to moderate the "worst text on the internet," including AI-generated extremist content. One worker, Mofat, "was on the sexual content team," and "his personality totally changed as he was reading child sexual abuse every day." He ultimately lost his family when his wife told him, "I don't know the man you've become anymore."
Meanwhile, OpenAI's data center expansion follows what Hao calls "the crisis playbook"—targeting economically vulnerable communities that lack the resources to understand the true costs. As one OpenAI employee told her regarding expansion plans: "We're running out of land and water... we're just trying to look at the whole world and see where else we can place these things."
The Question of Leadership
Reading Hao's investigation, one can't help but wonder how someone with Altman's apparent character flaws could build such a successful company. Hao's assessment is nuanced but damning: "He is such a polarizing figure, both extreme in the positive and negative direction. Some people feel he is the greatest tech leader of our generation... but they don't say that he is honest when they say that. They just say that he's one of the most phenomenal assets for achieving a vision of the future that they really agree with."
The tragedy, according to Hao, is that this manipulation serves a fundamentally flawed vision. "Different humans will have different blind spots, and if you give a small group of those people too much power to develop technologies that will affect billions of people's lives, inevitably that is structurally unsound."
The Environmental Destruction
Perhaps nowhere is Altman's disregard for consequences more evident than in OpenAI's catastrophic environmental impact. Under his leadership, the company has pioneered what Hao calls an extractive approach to AI development that treats the planet's resources as limitless inputs for his scaling obsession.
"The resource consumption required to develop these models and also use these models is quite extraordinary," Hao explained, detailing research showing that current AI expansion would require adding "half to 1.2 times the amount of energy consumed in the UK annually to the global grid in the next 5 years." Most alarming, as Altman himself admitted in Senate testimony, this massive energy demand "will most probably be natural gas."
The human cost of this expansion is already visible. In Memphis, Tennessee, "Elon Musk's xAI, the giant supercomputer that he built called Colossus... is being powered with around 35 unlicensed methane gas turbines that are pumping thousands of toxic air pollutants into the air into that community." When asked what "unlicensed" means, Hao was blunt: these companies "completely ignore[d] existing environmental regulations when they installed those methane gas turbines."
The Water Wars
Even more insidious is the industry's consumption of fresh water. "These data centers need fresh water to cool, because if they used any other kind of water it would erode, corrode the equipment," Hao discovered. "Most often these data centers actually use public drinking water because when they enter into a community, that is the infrastructure that's already laid."
The consequences are stark. Hao traveled to Montevideo, Uruguay, where "the government literally did not have enough water to put into the public drinking water supply, so they were mixing toxic waste water in just so people could have something come out of their taps." For families too poor to buy bottled water, "that is what they were drinking, and women were having higher rates of miscarriages, elderly were having an exacerbation or inflammation of their chronic diseases."
In the middle of this crisis, "Google proposed to build a data center that would use more drinking water" than the community could spare. As Hao noted with dark irony, "Bloomberg recently had a story that said two-thirds of the data centers now being built for AI development are in fact going into water-scarce areas."
The Invisible Costs
What makes Altman's environmental destruction particularly insidious is how it's hidden from public view. Despite the massive environmental footprint, "they don't actually tell us this," Hao revealed. "Google and Microsoft... do have annual reports where they say how much capital they've spent on data center construction. They do not break down how much of those data centers are being used for AI."
Even when companies do report environmental data, "they also massage that data a lot to make it seem better than it actually is." Yet even with this manipulation, "both Google and Microsoft reported... a 30% and 50% jump in their carbon emissions largely driven by this data center development."
The hypocrisy is staggering. While politicians tout AI as a solution to climate change, Hao discovered that "we're seeing reports of coal plants having their lives extended—they were meant to be retired, but they're no longer being retired explicitly to power data center development."
The Democratic Assault
Under Altman's leadership, OpenAI and its competitors have perfected what Hao calls "hijacking existing laws, existing regulations, existing democratic processes to build the infrastructure for their expansion." The pattern is consistent: "These companies are entering into communities and completely hijacking... democratic processes."
In Arizona, a legislator told Hao: "I didn't know it had to use fresh water," admitting she "would have never voted for having this data center" if she'd understood the environmental cost. But the fix was already in—"there are so few independent experts for these legislators, city council members to consult that the only people that they rely on for the information about what the impact of this is going to be are the companies."
The sales pitch is always the same: "We're going to invest millions of dollars, we're going to create a bunch of construction jobs up front, and it's going to be great for your economy." What they don't mention is the long-term devastation. In the UK, "data center development along the M4 corridor... has literally already led to a ban in construction of new housing in certain communities that desperately need more affordable housing" because "you cannot build new housing when you cannot guarantee deliveries of fresh water or electricity to that housing."
The False Choice
Perhaps most damaging is how Altman has convinced the world that this environmental destruction is necessary for AI progress. Politicians like UK Prime Minister Keir Starmer declare "we want to be AI creators, not AI consumers," believing they must sacrifice housing and environmental protection for technological advancement.
But Hao's research reveals this is "a false trade-off." Before OpenAI's scaling obsession, "the trend within the AI research community was going the opposite direction, towards tiny AI systems." Researchers were developing "AI systems trained on your mobile device... running on your mobile device," requiring minimal resources while delivering powerful capabilities.
"You would realize then you can have housing and you can have AI innovation," Hao explained, "but once again, there's not a lot of independent experts that are actually saying these things. Most AI experts today are employed by these companies," creating a situation "basically equivalent to if most climate scientists were being bankrolled by oil and gas companies."
The Reckoning
As Hao's book gains traction and more former employees speak out, Altman faces growing scrutiny. The carefully constructed narrative of the benevolent AI pioneer is being challenged by detailed reporting that reveals the complex realities behind OpenAI's transformation.
"I went to school with a lot of the people that now build these technologies," Hao reflected. "I don't find these figures to be towering or magical—like, I remember when we were walking around dorm rooms together in our pajamas." This perspective may be crucial for understanding that these are human decisions, not inevitable technological progress.
The question now is whether policymakers, investors, and the public will demand greater transparency and accountability from AI companies, or continue to accept the current trajectory of rapid scaling and resource extraction without democratic oversight.