Journalist Karen Hao warns against the ‘empires of AI’ and their impact

Journalist Karen Hao discusses her investigative reporting on AI and its impacts. (Jun Ru Chen | Contributing Photographer)
Journalist and author Karen Hao urged the WashU community to increase its awareness of the artificial intelligence (AI) industry, Silicon Valley’s actions, and the negative impact of both on society and the environment.
Hao’s New York Times bestselling book, “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” explores the inner workings of the AI industry and its consequences for the world. Hao, who once worked in Silicon Valley, has written about AI and accountability for numerous prestigious outlets, and she said she was the first journalist to profile OpenAI. She also leads the Pulitzer Center’s AI Spotlight Series, which trains journalists to critically cover AI and its impact.
The Oct. 16 event was hosted by the Weidenbaum Center on the Economy, Government, and Public Policy in partnership with the Pulitzer Center. The conversation was led by Elizabeth Pippert Larson, Associate Director of the Weidenbaum Center.
After the event, some attendees were surprised by Hao’s critical views and the depth of her reporting on AI.
Senior Kate Smith said that Hao’s criticism of OpenAI will make her “a lot more cautious” when she uses ChatGPT and other AI chatbots.
First-year Alden St. John said that hearing from Hao made him more mindful of AI’s environmental impacts and more open to alternative approaches to the technology.
“I felt encouraged by Hao’s idea on how we can develop AI in a way that is beneficial to society with the more curated, task-specific models,” St. John said.
Hao said that she had interviewed over 250 people for her book, including more than 90 current and former OpenAI employees and executives. Pippert Larson called the book a “chilling and deeply reported investigation” exploring the AI industry’s inner workings and impact.
When asked about how universities like WashU should approach AI, Hao responded that institutions should not put technology before their own goals and that schools should consult the wider community to make these decisions.
“I would say to universities, but to any organization … remember what your original purpose is, and AI, however you use AI, should be in service of that,” Hao said. “And if AI ultimately is undermining that project, then there’s a question about whether we should be using it.”
Hao also advocated for more open discussion of and experimentation with AI in university classrooms. She expressed concern about AI’s impact on the job market, citing a Stanford University study that found professions exposed to AI automation have seen a 13% decline in employment.
She advised students wondering how to approach AI in their future professions to lean into the areas that set them apart from AI’s capabilities.
“AI is already automating away a bunch of things that don’t require creativity, that don’t require critical thinking, that don’t require a unique voice or a unique perspective, and so you should be leaning into [how] college is the best time to find your unique identity and what your unique perspective is, and that’s something that’s irreplaceable,” Hao said.
Hao also said that she does not use AI at all in her personal or professional life.
Hao called AI an “ill-defined concept” that is not limited to ChatGPT and explained that the technology varies in how it is designed and implemented for different purposes.
Hao also defined AGI (artificial general intelligence) as AI that operates at the level of human intelligence. She called the idea a “theoretical notion” that may not be achievable but has been the ultimate goal of Silicon Valley’s efforts.
“This is an endeavor that we should be extremely critical of, because, ultimately, under the banner of trying to reach AGI, essentially, the companies have just found the perfect cover for consolidating an extraordinary amount of unprecedented economic and political power, unprecedented land, energy, water, data, resources and are undermining many pillars of democracy,” Hao said.
Hao highlighted the amount of energy these large systems and data centers require and said that the resulting rise in energy demand has boosted the fossil fuel industry. In the long term, she predicts this rise in demand will not be environmentally sustainable. She also explained that data centers consume fresh water for cooling, which hurts water-scarce areas.
Hao sought to debunk what she deemed a “myth” pushed by Silicon Valley: that AI can simply learn how to function on its own. Instead, she explained the human costs of building AI models like ChatGPT, which require people to show them how to engage in dialogue and develop patterns.
Additionally, Hao said that AI models like ChatGPT often require content moderation because they draw on content from the internet. Humans are therefore needed to build content moderation filters that block “grotesque” internet material from reaching users.
Hao recounted visiting Kenya to meet some of the workers OpenAI contracted to build its content moderation filter. These employees worked long hours and were paid low wages to sift through disturbing material, which she said left many of them “psychologically devastated.”
Hao also described seeing the development of “quasi-religious movements” around AI in Silicon Valley, which view the stakes of the technology’s development as the difference between humanity going to “AI heaven versus going to AI hell.”
“There are people at [OpenAI], and there are people in the broader AI industry that believe that they are ultimately building something akin to an AI God, and that when they achieve this kind of AI God, it is going to be cataclysmically transformative for civilization, and that if they do not build it correctly, if there’s just even one small mistake along the way, it could instead turn into an AI demon,” Hao said.
Hao also explored the personal life and character of Sam Altman, the CEO of OpenAI, who grew up in St. Louis. She criticized his outsized influence over AI development and said that Altman had consistently lied to executives and employees, which led to his temporary firing in 2023.
Hao further criticized Altman’s lack of firm values and beliefs, calling him and other tech tycoons “self-delusional.” She pointed to his longtime support of Democratic candidates and his consideration of a California gubernatorial campaign before he aligned with President Donald Trump after Trump’s 2024 election victory.
Hao asserted that Silicon Valley’s pursuit of AI “challenges democracy” by making tech companies more powerful than nations.
“Every single person [who] first encountered the ‘empire of AI’ felt the same feeling, which is, I have no more agency to self-determine my future,” Hao said. “How can I possibly have agency when my labor is being exploited and I’m being paid less than $2 an hour? How can I possibly have agency when my fresh water is being taken?”
In contrast to general-purpose models like ChatGPT, Hao proposed investing more in task-specific AI systems, which she said would be more effective and sustainable and could provide “specific, automated solutions” to societal problems without requiring content moderation or large supercomputers. One example she gave was Google DeepMind’s AlphaFold, a system that predicts protein structures and whose developers won the 2024 Nobel Prize in Chemistry.
At the core of Hao’s message to attendees was a call to action: resist the AI industry when its goals contradict the well-being of communities like WashU.
“At the local level, continue asking questions, showing up, protesting when you do not like what you see, and continue electing people at the federal level that are willing to hold these companies accountable,” Hao said.