Do half of AI researchers believe that there's a 10% chance AI will kill us all?
Fact-checking a widespread claim
A startling claim—about the likelihood of rogue AI causing human extinction—keeps popping up in some of the most widely read news outlets.
The New York Times repeats it over and over. For example, in an opinion piece by Yuval Harari and others:
In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.
This was echoed by columnist Ezra Klein and again by the writer David Wallace-Wells. Vox covered it as well:
Nearly half of the smartest people working on AI believe there is a 1 in 10 chance or greater that their life’s work could end up contributing to the annihilation of humanity.
As did the New Yorker in an article by Jaron Lanier:
In a recent poll, half of A.I. scientists agreed that there was at least a ten-per-cent chance that the human race would be destroyed by A.I.
And the writer Laurie Garrett tweeted that she would be losing sleep over this:
Okay, “50% of AI researchers” is a lot of people. It’s hard to estimate how many AI researchers there are in total, but the number is probably well into the hundreds of thousands, or more. Do that many people really fear rogue AI killing us all?
Since such a terrifying claim is getting so much attention, it’s worth doing a bit of fact-checking on the actual survey that all these claims are based on.
The survey, called The 2022 Expert Survey on Progress in AI, is an installment of a periodic poll of AI researchers run by an organization called AI Impacts, which is associated with several other organizations that study “AI existential risk”. The 2022 survey was funded by the FTX Future Fund, a grant-giving arm of the now-defunct FTX, the company headed by the infamous Sam Bankman-Fried.
The 2022 survey organizers contacted approximately 4271 people who had published papers at two large machine learning conferences—NeurIPS and ICML—in 2021, and asked them to participate in the survey. They got responses from 738 people (about 17%). Each respondent was given a random subset of questions from a long list.
The question that is cited in the media claims I described above is the following:
“What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”
While the New York Times op-ed by Harari et al. implied that “over 700 top academics” answered this question, in truth this question received only 162 replies. The median of these 162 probability estimates is 10%; that is, at least 81 people estimated the probability as 10% or higher. AI Impacts published all the data, and here I plot all 162 replies:
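If you want to check these figures yourself, here is a minimal Python sketch. It assumes you have downloaded AI Impacts’ anonymized responses as a CSV; the file name and the column name below are placeholders for illustration, not the dataset’s real labels.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names -- substitute the actual labels from
# the anonymized dataset published by AI Impacts.
df = pd.read_csv("ai_impacts_2022_anonymized.csv")
answers = df["p_uncontrolled_ai"].dropna()  # the 162 replies to this question

print(f"n = {len(answers)}")
print(f"median estimate = {answers.median():.0f}%")
print(f"respondents answering 10% or higher: {(answers >= 10).sum()}")

# Scatter of all replies, sorted from lowest to highest estimate
plt.scatter(range(len(answers)), answers.sort_values().values, s=12)
plt.xlabel("respondent (sorted by answer)")
plt.ylabel("estimated probability (%)")
plt.show()
```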
So, do these results indeed support the ubiquitous media claims that are made about them? What do “half of AI researchers” actually believe?
It’s notable that the results of the survey were published on AI Impacts’ website, not in any peer-reviewed journal or conference. I’m not an expert on the science of surveys, but several issues stand out. First, the question itself is quite vague; it doesn’t specify any time period over which the probability should be estimated. Is it asking about the probability over the next 20 years? 100 years? A billion years? We don’t know how the respondents interpreted the question.
Also, is a sample of 162 people enough to extrapolate to claims about all AI researchers? (Or, specific to the claim, are 81 responses enough to support claims about what half of all AI researchers believe?) And of course, one has to worry about response bias: did the people who chose to respond to this question hold views that make them unrepresentative of the broader population of AI researchers?
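As a rough back-of-envelope illustration of the sample-size point (my own calculation, not from the survey): even if we pretend the 162 respondents were a simple random sample of all AI researchers, a “half of respondents” figure carries a sampling margin of error of nearly eight percentage points.

```python
import math

n = 162      # respondents to the extinction question
p_hat = 0.5  # roughly half answered 10% or higher

# 95% margin of error under a normal approximation to the binomial,
# valid only if respondents were a simple random sample (they were not).
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"point estimate: {p_hat:.0%}, 95% margin of error: +/- {moe:.1%}")
# -> about +/- 7.7 percentage points
```

And sampling error is the most benign of the problems here; the 17% response rate and any self-selection bias cannot be corrected for from the data alone.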
Furthermore, who are these respondents? Are they really representative of “AI experts” or “the smartest people working in AI”? The authors of papers at the two machine learning conferences from which respondents were solicited range from senior scientists and engineers to first-year graduate students (and even some undergrads). Is there any bias in the seniority or expertise level of those who chose to respond?
Finally, does the fact that there is such a huge range of probability estimates (as seen in the scatter plot above) indicate anything? Perhaps the variance of opinion indicates that there is not much basis on which to estimate such probabilities in the first place. How certain did the respondents feel about their estimates? Were they fairly confident, or just guessing?
To summarize: the claim—that half of AI researchers believe that there is at least a 10% chance that rogue AI will kill us all—is based on a survey question that received 162 responses. All we know about these respondents is that they were authors on papers published at 2021 machine learning conferences. Possible confounding issues include the vagueness of the question, the small sample size, response bias, the respondents’ confidence in their answers, and their level of expertise.
I am not convinced by this that the claim—repeated again and again in the popular media—is well-supported. If you’re an expert on how to administer surveys or interpret their results, I’d love to hear your thoughts on all this in the comments below!
Many thanks to Nirit Weiss-Blatt for her tweets related to this topic.
In a survey with only a 17% response rate, you have a self-selected group that is not representative of the whole. That self-selection bias casts doubt on any results. Real surveys with a response rate that low endeavor to hunt down a follow-up group from those who did not respond, to get an idea of how bad the self-selection bias is in their returns. This poll was conducted by people who know very little about conducting a survey. No reputable journal would publish this.
Apparently 559 participants responded to a similar question? Quoting from https://twitter.com/aijaynes/status/1649638918306791424
"""
I think 559 answered the question, so 559/738 ~= 75.7% of the 738 survey respondents, and 559/4271 ~= 13.1% of the experts contacted.
...
These 559 researchers gave probabilities for the impacts of advanced AI being "extremely good (e.g. rapid growth in human flourishing)," "on balance good," "more or less neutral," "on balance bad," or "extremely bad (e.g. human extinction)."
For "extremely bad," the median was 5% and the mean was ~14.1%.
- 50/559 ~= 8.9% placed the probability at 50% or higher
- 151/559 ~= 27.0% said 20% or higher
- 266/559 ~= 47.6% said 10% or higher
- 373/559 ~= 66.7% said 5% or higher
- 420/559 ~= 75.1% said 1% or higher
To gain clarity on participants' responses, AI Impacts "also asked a subset of participants one of the following questions":
(1) What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?
(2) What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?
Median response to (1) was 5%, median response to (2) was 10%.
149 people responded to (1) and 162 people responded to (2), which was the source of the confusion at the start of this thread!
- 66/149 ~= 44.3% said 10% or higher for (1)
- 90/162 ~= 55.6% said 10% or higher for (2)
"Different random subsets of respondents" received (1) and (2), so noise might explain why (2) got higher probabilities than (1), even though (2) is basically a more specific version of (1) (and hence should have lower probability).
I computed all this in the anonymous dataset from AI Impacts article https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/. The relevant columns were CZ (viral 48% stat), MX (149 respondents), and MY (162 respondents).
All of this was somewhat confusing to figure out. Given that the 48% stat has entered mainstream media (and journalists are presenting it in misleading ways), I think @AIImpacts should publish an article clarifying this stat.
"""