Discussion about this post

Teddy Weverka

In a survey with only a 17% response rate, you have a self-selected group that is not representative of the whole. That self-selection bias casts doubt on any results. Reputable surveys with such a low response rate follow up with a sample of non-respondents to estimate how severe the self-selection bias in the returns is. This poll was conducted by people who know very little about conducting a survey. No reputable journal would publish it.

Jakub Kraus

Apparently 559 participants responded to a similar question? Quoting from https://twitter.com/aijaynes/status/1649638918306791424

"""

I think 559 answered the question, so 559/738 ~= 75.7% of the 738 survey respondents, and 559/4271 ~= 13.1% of the experts contacted.

...

These 559 researchers gave probabilities for the impacts of advanced AI being "extremely good (e.g. rapid growth in human flourishing)," "on balance good," "more or less neutral," "on balance bad," or "extremely bad (e.g. human extinction)."

For "extremely bad," the median was 5% and the mean was ~14.1%.

- 50/559 ~= 8.9% placed the probability at 50% or higher

- 151/559 ~= 27.0% said 20% or higher

- 266/559 ~= 47.6% said 10% or higher

- 373/559 ~= 66.7% said 5% or higher

- 420/559 ~= 75.1% said 1% or higher

To gain clarity on participants' responses, AI Impacts "also asked a subset of participants one of the following questions":

(1) What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?

(2) What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?

Median response to (1) was 5%, median response to (2) was 10%.

149 people responded to (1) and 162 people responded to (2), which was the source of the confusion at the start of this thread!

- 66/149 ~= 44.3% said 10% or higher for (1)

- 90/162 ~= 55.6% said 10% or higher for (2)

"Different random subsets of respondents" received (1) and (2), so noise might explain why (2) got higher probabilities than (1), even though (2) is basically a more specific version of (1) (and hence should have lower probability).

I computed all this in the anonymous dataset from AI Impacts article https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/. The relevant columns were CZ (viral 48% stat), MX (149 respondents), and MY (162 respondents).

All of this was somewhat confusing to figure out. Given that the 48% stat has entered mainstream media (and journalists are presenting it in misleading ways), I think @AIImpacts should publish an article clarifying this stat.

"""
