27 Comments
Apr 23, 2023 · Liked by Melanie Mitchell

In a survey with only a 17% response rate, you have a self-selected group that is not representative of the whole. That self-selection bias casts doubt on any results. Real surveys with a response rate that low endeavor to hunt down a follow-up sample from those who did not respond, to get an idea of how bad the self-selection bias in the returns is. This poll was conducted by people who know very little about conducting a survey. No reputable journal would publish this.
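
To make the nonresponse point concrete: later in this thread the figures 738 respondents out of 4271 experts contacted (~17%) are quoted. Here is a minimal sketch of the standard worst-case ("Manski-style") bounds under nonresponse -- not anything the survey authors computed, and the 50% respondent fraction is just the viral headline figure, used for illustration:

```python
# Worst-case bounds on a population proportion under nonresponse.
# Counts (738 of 4271 contacted) are quoted later in this thread;
# the 50% respondent fraction is the viral headline, used only to illustrate.

contacted = 4271
respondents = 738
response_rate = respondents / contacted          # ~17.3%

p_among_respondents = 0.5                        # "half of those surveyed..."

# If every non-respondent would have disagreed:
lower = p_among_respondents * response_rate
# If every non-respondent would have agreed:
upper = p_among_respondents * response_rate + (1 - response_rate)

print(f"response rate: {response_rate:.1%}")
print(f"population fraction lies somewhere in [{lower:.1%}, {upper:.1%}]")
# -> roughly [8.6%, 91.4%]: nonresponse alone leaves the headline
#    almost uninformative about the full population of AI researchers.
```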

Apr 29, 2023 · Liked by Melanie Mitchell

Apparently 559 participants responded to a similar question? Quoting from https://twitter.com/aijaynes/status/1649638918306791424

"""

I think 559 answered the question, so 559/738 ~= 75.7% of the 738 survey respondents, and 559/4271 ~= 13.1% of the experts contacted.

...

These 559 researchers gave probabilities for the impacts of advanced AI being "extremely good (e.g. rapid growth in human flourishing)," "on balance good," "more or less neutral," "on balance bad," or "extremely bad (e.g. human extinction)."

For "extremely bad," the median was 5% and the mean was ~14.1%.

- 50/559 ~= 8.9% placed the probability at 50% or higher

- 151/559 ~= 27.0% said 20% or higher

- 266/559 ~= 47.6% said 10% or higher

- 373/559 ~= 66.7% said 5% or higher

- 420/559 ~= 75.1% said 1% or higher

To gain clarity on participants' responses, AI Impacts "also asked a subset of participants one of the following questions":

(1) What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?

(2) What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?

Median response to (1) was 5%, median response to (2) was 10%.

149 people responded to (1) and 162 people responded to (2), which was the source of the confusion at the start of this thread!

- 66/149 ~= 44.3% said 10% or higher for (1)

- 90/162 ~= 55.6% said 10% or higher for (2)

"Different random subsets of respondents" received (1) and (2), so noise might explain why (2) got higher probabilities than (1), even though (2) is basically a more specific version of (1) (and hence should have lower probability).

I computed all this in the anonymous dataset from AI Impacts article https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/. The relevant columns were CZ (viral 48% stat), MX (149 respondents), and MY (162 respondents).

All of this was somewhat confusing to figure out. Given that the 48% stat has entered mainstream media (and journalists are presenting it in misleading ways), I think @AIImpacts should publish an article clarifying this stat.

"""

author

Thanks. The "MY" column stat (with median "10% Probability") is the question that the NY Times and other news venues were specifically quoting, which is why I focused on that question.

Apr 23, 2023 · edited Apr 23, 2023 · Liked by Melanie Mitchell

Great write-up. I detest hype of any sort, whether it is for a tech or against it. So I love this kind of sober analysis.

My reaction: Does this statement, even without doing any empirical analysis in the first place -- "Half of those surveyed stated that there was a 10 percent or greater chance of human extinction... from future A.I. systems" -- mean anything at all? What does '10% chance' mean, for instance? What does "future AI systems" mean? If we were to replace "AI" with "Software," would the survey have been interesting to anyone (even though AI is software, and soon most software will be AI-based, if not outright AI-written)?

In any case, let's certainly expect the Ezra Kleins of this world to make a LOT of noise about "AI" -- whatever they take the term to mean.

I can respond at length if you want, but the sketch of my argument would be as follows:

Note, I'm biased because I'm friends with one of the people at AI Impacts.

- There were two big questions at the end of the survey and each respondent was shown only one of them, hence the split in response counts

- These answers were in line with other answers in the survey

- I agree that this doesn't imply that 50% of AI researchers believe this, but I think it should update us in that direction

- The last survey was highly cited here; this is a more recent one. I imagine it could be published in a journal if the authors wanted to https://scholar.google.co.uk/citations?view_op=view_citation&hl=en&user=PUQJdUsAAAAJ&citation_for_view=PUQJdUsAAAAJ:9yKSN-GCB0IC

- To me the question is: how does this change our guess about what AI researchers think, *and* is it fair to say that 50% of researchers think this?

1) I think this survey should cause us to believe that many AI researchers put some chance on existential risk. It seems very unlikely that it's a tiny group who think this is a relevant fear.

2) I don't love the "50% of researchers think this", but that is the language we'd use if it was climate change. I don't know what the right set of words is; perhaps "50% of AI researchers who responded thought that".

May 15, 2023 · edited May 15, 2023

Sorry Nathan and Max, I am intruding on your discussion, but I had not seen the end of Nathan’s Apr 26 message: “that is the language we'd use if it was climate change”

Max and I just had an exchange (not full agreement!) about a similar topic here: https://aiguide.substack.com/p/do-half-of-ai-researchers-believe/comment/15122459?r=1zp1st&utm_medium=ios

Let me add a few words:

Maybe “x% of researchers think there is a y% probability that climate change will eradicate humanity”, but the main climate topic is not that one: a mean temperature rise of 2-4 degrees SHOULD not eradicate humanity, but WILL bring unprecedented worldwide crises, unpleasant for many if not all human beings (not to mention other living species). And this statement comes not from a questionable single survey but from the huge and rigorous multi-domain worldwide research consensus mechanism of the IPCC, covering both the trend (many tentative trajectories, none reassuring in my opinion) and the actions needed for:

1- landing at +2 rather than +6

2- reducing the impacts as much as possible

Coming back to AI risks (not only the end of mankind), what about pushing for an international "governance" (UN-based? extended from the EU AI Act? some output of the recent "letter"? ...), mainly providing guidance and rules, a sort of extension of GDPR, organizing multidisciplinary brainstorming about AI opportunities & threats, and even periodic serious surveys to refine our 50% and 10% :-) and analyse the trends!

=> does the embryo of such an international organization already exist?

"the main Climate topic is not this one : mean temperature rise of 2-4 degrees SHOULD not eradicate humanity, but WILL bring unprecedented worldwide crisis" but that is NOT what the authors of the 97% paper were counting as positive hits.

"I don't love the "50% of researchers think this" but that is the language we'd use if it was climate change." That is NOT a good reason, considering the extreme alarmism bias in climate reporting.

Yeah I don't think it's a reason, but it is a rebuttal of the criticism that this survey received compared to similar statements. I suggest it's less about the survey and more about the conclusions.

More like "50% of AI researchers responding to a survey as part of a self-selected group and so not statistically representative of anything other than the responding group thought that there was at least a 10% chance."

That said, it's disturbing in some other ways, like people working in an area where they think what they do might kill off humanity.

Meh, the real question is "what do AI researchers believe?". That's all that matters. And with these results I'd struggle to believe the median researcher gives it less than 1%.

Apr 23, 2023 · Liked by Melanie Mitchell

Thanks a lot; it's frightening how a single and weak "piece of information" (should we say "information"?) like that can be reused many times, giving the impression of a sort of "wide consensus"!

What about organizing a serious survey? (And repeating it; the Vox article claims "AI experts are increasingly afraid of what they’re creating"!)

Could it be part of some governance initiative (the EU AI Act for instance, or even some output of the recent "letter"...)?

I won't go down into the twisted analogy with the IPCC (researchers were afraid about the climate crisis (and they were right!), warned, and... the UN, through the IPCC, built a sort of "consensual research voice" about the risks and the actions (for GHG reduction and for coping with the effects)), because it would either reduce the importance of the climate crisis in people's minds, or grow unnecessary concerns about "AI leading to the end of humanity"...

They were afraid about the climate crisis and were wrong. The IPCC report is flawed but contains a fair amount of good science. The Executive Summary for policymakers does not accurately reflect the main report and is far more alarmist. Media reports on the executive summary are even further from the science.

For a different perspective, see the many excellent pieces by Roger Pielke, and the CLINTEL report:

https://rogerpielkejr.substack.com/

https://clintel.org/download-ipcc-book-report-2023/

Ahhh…

I'm going to stay measured, and this thread isn't about global warming, but I can't stay silent, because "qui ne dit mot consent" (silence implies consent)!

1) The summaries of the IPCC report, which involve the representatives of the states, are logically VERY WATERED DOWN compared to the detailed reports developing the researchers' messages, contrary to what you assert. I have heard several presentations on this consensus mechanism, which results for example in the replacement of a recommendation for more "plant-based food" by "changes towards less resource-intensive diets", with clear but too-disturbing messages at best relegated to footnotes and replaced with interpretable jargon in the body text.

2) I just opened the summary of the document you recommend, and I read (pp. 16-17): "In this report we have shown that many of the important claims of the IPCC – i.e., that current warming is unprecedented, that it is 100% caused by humans, that it is dangerous – are all questionable". If I dared, I would attempt an analogy (one more!) with Melanie Mitchell’s analysis of the value of the famous 50%…10% statistic, by asking how many interns and PhD students are co-signatories of this paper, but I don't want to argue or continue this exchange; we just disagree.

I will instead hurry off to read Melanie’s new publication about the Abstraction and Reasoning Corpus…

The summary (even more than the main reports) and many other papers continue to use the extremely implausible 8.5 scenario. That's because it's good for scaring people, selling papers, extracting more research money, etc.

You might read some of Roger Pielke's blogs. He knows this stuff as well as anyone and contributed to the IPCC until dissenting voices were... discouraged.

This is so helpful. I'm not a scientist, but I have an analytical mind - and if you ask me a question like that, I won't rule it out. 10% - if I just have to guess at a probability of something - would certainly be my own way of saying "I'm not going to rule it out," but beyond that it wouldn't mean much. I'm so grateful for this further breakdown into the actual study. 🙏

Thank you for this research and post ... am sure I will be appending the URL and/or your tweet when I see this stat lazily repeated in other fora.

Brilliant, thanks for posting.

Apr 23, 2023 · Liked by Melanie Mitchell

Thanks, Prof Mitchell, this is useful. Folks like Prof Emily Bender are more likely to home in on the rhetorical purpose behind the form of the question, like omitting time ranges, as you noted. Or even on including such a badly-formulated question in a survey at all.

One thing all people should know about AI is that it is very, very stupid! It can sound deceptively convincing, but it's genuinely infinitely dumb. Having said that, is there a chance that the end of human civilization will be triggered by AI? Probably, but if that ever happens, it will be because some gullible humans thought it was a good idea to put AI in the driver's seat.

There is amazing potential for AI as a "copilot" (hence what Microsoft is currently doing). I just don't think it's a good idea to try to develop AI systems to be used as "the pilot".

I would have to agree that the statistic itself is less compelling than it may have looked at first glance.

But the fact that one of the field's most celebrated pioneers (Geoffrey Hinton) and many other luminaries are so concerned might be reason by itself for alarm.

It's an important enough issue that someone should do a proper survey.

Prof. Mitchell, quick question: did everyone who got this question as part of their survey have to submit a response to it? In other words, could it be that the actual number of folks who received this particular question is larger than 162, and only 162 submitted answers to it? A lot of my colleagues at Accenture send out a lot of surveys, but they don't force a response to every single question. At any rate, thanks a lot for surfacing these for us!

author

I don't know -- the AI Impacts post on this survey did not say whether there were more than 162 people who got this question. I'd guess yes -- it's not likely that everyone who received the question chose to answer it.

Exactly, so the denominator for that probability estimate should be strictly larger than 162. Meaning the statement "50% of researchers" is not only flawed by the factors you mention, such as sampling bias, etc., but could also be a simple miscalculation (not considering all possible values of the random variable in question). In fact it could be closer to 10% [of researchers] (if all ~700 survey takers could have answered this particular question and many chose not to).
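
To put rough numbers on that: the counts quoted earlier in the thread have 90 of 162 answering 10% or higher on that question. Here is a small sketch of how the headline fraction moves under different denominators; the two larger ones are hypothetical assumptions about who received but skipped the question, not figures reported by the survey:

```python
# Sensitivity of the "half of researchers" headline to the denominator.
# 90/162 comes from the survey data quoted in this thread; the larger
# denominators are hypothetical assumptions, not reported figures.
said_10_percent_or_more = 90

denominators = {
    "answered the question": 162,
    "all survey respondents (assumption)": 738,
    "all experts contacted (assumption)": 4271,
}
for label, n in denominators.items():
    print(f"{said_10_percent_or_more}/{n} = {said_10_percent_or_more / n:.1%} ({label})")
# -> 55.6%, 12.2%, 2.1%: the headline slides from "about half" toward
#    "about a tenth" or less, depending on who you count.
```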

I’m surprised more comments aren’t about the funding source being FTX; that alone says a lot, considering their reputation and the lack of any due diligence...

AI is probably a continuum, and there is still no universally accepted definition. The kinds of problems that really are worrisome come from human engineering, like the 737 MAX having no engineered kill switch that the pilot could get to. It will always be possible, up to the apocalypse at least, to use technology irresponsibly.

Thank you!
