Artificial intelligence (AI) is often portrayed as a looming threat to humanity, a technology that could surpass human intelligence and learn how to destroy us on its own. But this scenario, popularized by sci-fi movies and some AI pioneers, is missing the point, according to Meredith Whittaker, a prominent AI researcher and president of the Signal Foundation.
Whittaker, who was pushed out of Google in 2019 for organizing employees against the company’s deal with the Pentagon to build machine vision technology for military drones, argues that the real danger of AI is not its potential consciousness, but the corporations that control it and exploit it for profit and power.

In an interview with Fast Company, Whittaker says that AI is not a neutral or objective technology, but a reflection of the values and interests of those who create and deploy it. She cites examples of how AI is used to harm marginalized communities, such as facial recognition systems that target protesters and activists, algorithmic hiring tools that discriminate against women and minorities, and social media platforms that amplify misinformation and hate speech.
Whittaker also challenges the idea that AI ethics can solve these problems, pointing out that ethics are often used as a PR tool or a way to avoid regulation. She says that ethics frameworks are usually vague and voluntary, and do not address the underlying power structures that enable AI harms. She calls for more democratic oversight and accountability of AI systems, as well as more solidarity and collective action from workers and users who are affected by them.
One of Whittaker’s inspirations is Timnit Gebru, a former co-leader of Google’s ethical AI team who was fired in 2020 over a paper that criticized Google’s large language models. Gebru’s paper argued that these models are not only environmentally costly, but also encode biases and stereotypes that can harm marginalized groups. Gebru, who also co-founded Black in AI, a group that advocates for more diversity and inclusion in the field, has continued her research on AI accountability since leaving Google.
Another voice that Whittaker respects is Kate Crawford, a researcher at Microsoft and NYU who recently published a book titled Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Crawford’s book exposes the hidden costs of AI, such as the extraction of natural resources, the exploitation of human labor, and the erosion of civil liberties. Crawford argues that AI is not an abstract or immaterial technology, but a physical and political one that has profound impacts on society and the environment.
Whittaker believes that these voices are more important than those who warn about AI’s superintelligence or singularity. She says that such scenarios are hypothetical and speculative, and distract from the real and present harms of AI. She urges people to pay more attention to the corporations that control AI and their agendas, and to resist their attempts to shape the future of humanity.