
AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

Effective regulation of AI needs grounded science that investigates real harms, not glorified press releases about existential risks

Illustration of people walking and their faces being recognized by AI. Credit: Hannah Perry

Wrongful arrests, an expanding surveillance dragnet, defamation and deepfake pornography are all existing dangers of the so-called artificial-intelligence tools currently on the market. These issues, and not the imagined potential to wipe out humanity, are the real threat of artificial intelligence.

End-of-days hype surrounds many AI firms, but their technology already enables myriad harms, including routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.

Nevertheless, in 2023 the nonprofit Center for AI Safety released a statement—co-signed by hundreds of industry leaders—warning of "the risk of extinction from AI," which it asserted was akin to the threats of nuclear war and pandemics. Sam Altman, embattled CEO of OpenAI, the company behind the popular large language model ChatGPT, had previously alluded to such a risk in a congressional hearing, suggesting that generative AI tools could go "quite wrong." In the summer of 2023 executives from AI companies met with President Joe Biden and made several toothless voluntary commitments to curtail "the most significant sources of AI risks," hinting at theoretical apocalyptic threats instead of emphasizing real ones. Corporate AI labs justify this kind of posturing with pseudoscientific research reports that misdirect regulatory attention to imaginary scenarios and use fearmongering terminology such as "existential risk."




The broader public and regulatory agencies must not fall for this maneuver. Rather, we should look to scholars and activists who practice peer review and have pushed back on AI hype in an attempt to understand its detrimental effects here and now.

Because the term "AI" is ambiguous, having clear discussions about it is difficult. In one sense, it is the name of a subfield of computer science. In another it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. And in marketing copy and start-up pitch decks, the term "AI" serves as magic fairy dust that will supercharge your business.

Since OpenAI's release of ChatGPT in late 2022 (and Microsoft's incorporation of the tool into its Bing search engine), text-synthesis machines have emerged as the most prominent AI systems. Large language models such as ChatGPT extrude remarkably fluent and coherent-seeming text but have no understanding of what the text means, let alone the ability to reason. (To suggest otherwise is to impute comprehension where there is none, something done purely on faith by AI boosters.) These systems are the equivalent of enormous Magic 8 Balls that we can play with by framing the prompts we send them as questions and interpreting their output as answers.
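The pattern-matching idea described above can be made concrete with a deliberately tiny sketch. This is not how any production system is built, and the corpus and function names here are invented for illustration: a bigram model that simply tracks which word follows which in its training text and samples from those counts. At a vastly larger scale, with far richer statistics, this is the same basic move a large language model makes, and the sketch shows why fluent-sounding output requires no understanding of what the words mean.

```python
# Toy bigram "language model" (illustration only): generates text purely
# by replaying word-to-word patterns from its training data. It has no
# model of meaning -- only of which word tends to follow which.
import random
from collections import defaultdict

corpus = ("the model predicts the next word . "
          "the model has no understanding of the words . "
          "the output sounds plausible .").split()

# Record, for each word, the words that followed it in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation of `start` by repeated bigram lookup."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every word the sketch emits was seen somewhere in its training text, recombined by frequency alone, which is why the output can read as sensible while asserting nothing the system "knows."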

Unfortunately, that output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem. Not only do we risk mistaking synthetic text for reliable information, but that noninformation reflects and amplifies the biases encoded in AI training data—in the case of large language models, every kind of bigotry found on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citation of real sources. The longer this synthetic text spill continues, the worse off we are because it gets harder to find trustworthy sources and harder to trust them when we do.

The people selling this technology propose that text-synthesis machines could fix various holes in our social fabric: the shortage of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, to name just a few.

But deployment of this technology actually hurts workers. For one thing, the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created them. In addition, the task of labeling data to create "guardrails" intended to prevent an AI system's most toxic output from being released is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom in terms of their pay and working conditions. What is more, employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This scenario motivated the recent actors' and writers' strikes in Hollywood, where grotesquely overpaid moguls have schemed to buy eternal rights to use AI replacements of actors for the price of a day's work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.

AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Many of these publications are based on junk science: nonreproducible, hidden behind trade secrecy, full of hype, and reliant on evaluation methods that do not measure what they purport to measure.

Recent examples include a 155-page preprint paper entitled "Sparks of Artificial General Intelligence: Early Experiments with GPT-4" from Microsoft Research, which claims to find "intelligence" in the output of GPT-4, one of OpenAI's text-synthesis machines. Then there are OpenAI's own technical reports on GPT-4, which claim, among other things, that OpenAI systems have the ability to solve new problems that are not found in their training data. No one can test these claims because OpenAI refuses to provide access to, or even a description of, those data. Meanwhile "AI doomers" cite this junk science in their efforts to focus the world's attention on the fantasy of all-powerful machines possibly going rogue and destroying humanity.

We urge policymakers to draw on solid scholarship that investigates the harms and risks of AI, as well as the harms caused by delegating authority to automated systems, which include the disempowerment of the poor and the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on not using this technology to hurt people.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Alex Hanna is director of research at the Distributed AI Research Institute. She focuses on the labor building the data underlying AI systems and how these data exacerbate existing racial, gender and class inequality.


Emily M. Bender is a professor of linguistics at the University of Washington. She specializes in computational linguistics and the societal impact of language technology.

This article was originally published with the title "Theoretical AI Harms Are a Distraction" in Scientific American Magazine Vol. 330 No. 2, p. 69.
doi:10.1038/scientificamerican0224-69