27 Comments
Apr 23, 2023 · Liked by Melanie Mitchell

In a survey with only a 17% response rate, you have a self-selected group that is not representative of the whole. That self-selection bias casts doubt on any results. Real surveys with that low a response rate endeavor to hunt down a follow-up group from those who did not respond, to get an idea of how bad the self-selection bias is in their returns. This poll was conducted by people who know very little about conducting a survey. No reputable journal would publish this.
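To see how little a 17% response rate pins down on its own, here is a minimal sketch of the standard worst-case (Manski) bounds under nonresponse. The 17% figure is the survey's approximate response rate; the 48% respondent share is just the widely quoted stat, used here purely for illustration:

```python
# Worst-case (Manski) bounds under nonresponse: with a 17% response rate,
# the nonrespondents could in principle all agree or all disagree with
# the respondents, so the data alone pin down almost nothing.
response_rate = 0.17            # roughly 738 / 4271, as reported for this survey
share_among_respondents = 0.48  # illustrative: the widely quoted "48%" stat

lower = share_among_respondents * response_rate                        # every nonrespondent disagrees
upper = share_among_respondents * response_rate + (1 - response_rate)  # every nonrespondent agrees
print(f"true share among all contacted: between {lower:.1%} and {upper:.1%}")
# -> between 8.2% and 91.2%
```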

Apr 29, 2023 · Liked by Melanie Mitchell

Apparently 559 participants responded to a similar question? Quoting from https://twitter.com/aijaynes/status/1649638918306791424

"""

I think 559 answered the question, so 559/738 ~= 75.7% of the 738 survey respondents, and 559/4271 ~= 13.1% of the experts contacted.

...

These 559 researchers gave probabilities for the impacts of advanced AI being "extremely good (e.g. rapid growth in human flourishing)," "on balance good," "more or less neutral," "on balance bad," or "extremely bad (e.g. human extinction)."

For "extremely bad," the median was 5% and the mean was ~14.1%.

- 50/559 ~= 8.9% placed the probability at 50% or higher

- 151/559 ~= 27.0% said 20% or higher

- 266/559 ~= 47.6% said 10% or higher

- 373/559 ~= 66.7% said 5% or higher

- 420/559 ~= 75.1% said 1% or higher

To gain clarity on participants' responses, AI Impacts "also asked a subset of participants one of the following questions":

(1) What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?

(2) What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?

Median response to (1) was 5%, median response to (2) was 10%.

149 people responded to (1) and 162 people responded to (2), which was the source of the confusion at the start of this thread!

- 66/149 ~= 44.3% said 10% or higher for (1)

- 90/162 ~= 55.6% said 10% or higher for (2)

"Different random subsets of respondents" received (1) and (2), so noise might explain why (2) got higher probabilities than (1), even though (2) is basically a more specific version of (1) (and hence should have lower probability).

I computed all this from the anonymized dataset in the AI Impacts article https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/. The relevant columns were CZ (viral 48% stat), MX (149 respondents), and MY (162 respondents).

All of this was somewhat confusing to figure out. Given that the 48% stat has entered mainstream media (and journalists are presenting it in misleading ways), I think @AIImpacts should publish an article clarifying this stat.

"""

Apr 23, 2023 · edited Apr 23, 2023 · Liked by Melanie Mitchell

Great write-up. I detest hype of any sort, whether it is for a technology or against it. So I love this kind of sober analysis.

My reaction: Does this statement, even without doing any empirical analysis in the first place -- "Half of those surveyed stated that there was a 10 percent or greater chance of human extinction... from future A.I. systems" -- mean anything at all? What does "10% chance" mean, for instance? What does "future AI systems" mean? If we were to replace "AI" with "software," would the survey have been interesting to anyone (even though AI is software, and soon most software will be AI-based, if not outright AI-written)?

In any case, let's certainly expect the Ezra Kleins of this world to make a LOT of noise about "AI" -- whatever they take the term to mean.


I can respond at length if you want, but the sketch of my argument would be as follows:

Note: I'm biased because I'm friends with one of the people at AI Impacts.

- There were two big questions at the end of the survey, and each respondent was shown only one of them, hence the per-question response counts fell

- These answers were in line with other answers in the survey

- I agree that this doesn't imply that 50% of AI researchers believe this, but I think it should update us in that direction

- The last survey was highly cited; this is a more recent one. I imagine it could be published in a journal if the authors wanted to: https://scholar.google.co.uk/citations?view_op=view_citation&hl=en&user=PUQJdUsAAAAJ&citation_for_view=PUQJdUsAAAAJ:9yKSN-GCB0IC

- To me the question is: how should this change our guess about what AI researchers think, *and* is it fair to say that 50% of researchers think this?

1) I think this survey should cause us to believe that many AI researchers put some chance on existential risk. It seems very unlikely that it's a tiny group who think this is a relevant fear

2) I don't love the "50% of researchers think this" framing, but that is the language we'd use if it were climate change. I don't know what the right set of words is; perhaps "50% of AI researchers who responded thought that"

Apr 23, 2023 · Liked by Melanie Mitchell

Thanks a lot. It's frightening how a single, weak "piece of information" (should we even say "information"?) like that can be reused so many times, giving the impression of some sort of "wide consensus"!

What about organizing a serious survey? (And repeating it; the Vox article claims "AI experts are increasingly afraid of what they're creating"!)

Could it be part of some governance initiative (the EU AI Act, for instance, or even some output of the recent "letter" ...)?

I won't go into the twisted analogy with the IPCC (climate researchers were afraid about the climate crisis (and they were right!), warned us, and the UN, through the IPCC, built a sort of consensual research voice about the risks and the actions (both for GHG reduction and for coping with the effects)), because that analogy would either reduce the importance of the climate crisis in people's minds, or grow unnecessary concerns about "AI leading to the end of humanity"...


This is so helpful. I'm not a scientist, but I have an analytical mind, and if you ask me a question like that, I won't rule it out. 10%, if I just have to guess at a probability of something, would certainly be my own way of saying "I'm not going to rule it out," but beyond that it wouldn't mean much. I'm so grateful for this further breakdown of the actual study. 🙏

Apr 23, 2023 · Liked by Melanie Mitchell

Thank you for this research and post ... I am sure I will be appending the URL and/or your tweet when I see this stat lazily repeated in other fora.


Brilliant, thanks for posting.

Apr 23, 2023 · Liked by Melanie Mitchell

Thanks, Prof. Mitchell, this is useful. Folks like Prof. Emily Bender are more likely to home in on the rhetorical purpose behind the form of the question, like omitting time ranges, as you noted. Or even on the inclusion of such a badly formulated question in a survey at all.


One thing all people should know about AI is that it is very, very stupid! It can sound deceptively convincing, but it's genuinely infinitely dumb. Having said that, is there a chance that the end of human civilization will be triggered by AI? Probably, but if that ever happens, it will be because some gullible humans thought it was a good idea to put AI in the driver's seat.

There is amazing potential for AI as a "copilot" (hence what Microsoft is currently doing). I just don't think it's a good idea to try to develop AI systems to be used as "the pilot".


I would have to agree that the statistic itself is less compelling than it may have looked at first glance.

But the fact that the founder of the field (Geoffrey Hinton) and many other luminaries are so concerned might be reason by itself for alarm.

It's an important enough issue that someone should do a proper survey.


Prof. Mitchell, quick question: did everyone who got this question as part of their survey have to submit a response to it? In other words, could it be that the actual number of folks who received this particular question is larger than 162, and only 162 submitted answers to it? A lot of my colleagues at Accenture send out a lot of surveys, but they don't force a response to every single question. At any rate, thanks a lot for surfacing these for us!


I’m surprised more comments aren’t about the funding source being FTX; that alone says a lot, considering their reputation and the lack of any due diligence...


AI is probably a continuum, and there still is no universally accepted definition. The kinds of problems that really are worrisome come from human engineering, like the 737 MAX having no engineered kill switch that the pilot could get to. It will always be possible, up to the apocalypse at least, to use technology irresponsibly.


Thank you!
