45 Comments
Apr 3, 2023 · Liked by Melanie Mitchell

Thanks for a superb post. So very well said.

I spoke with a defense journalist last week. He's publishing a story soon on LLMs for government use. He decided to query ChatGPT and several other LLMs about me (he knows me). Despite the fact that a single-source, official Air Force biography, which tells you almost everything about my background and experience, can be found in less than a second, these systems provided "facts" about me that were completely, utterly wrong. Pure hallucinations.

Even if I risk conflating short-term AI apples with long-term AI oranges with this example, it's still illustrative of the issues you raise in this Substack.

Apr 3, 2023 · Liked by Melanie Mitchell

I agree with your sentiments. What we need, as you point out, is "Open" "AI": transparency about how these models are constructed, and research into why and when the emergent properties they exhibit occur. Intelligence of any kind cannot be bottled up by a single company or a few. It needs to be open to benefit humanity, with regulations in place to avoid harm. We need an FDA for AI, with randomized controlled trials like those used in drug development.

Apr 3, 2023 · Liked by Melanie Mitchell

Hi Melanie! Excellent piece, a nice counterpoint to all the hype :)

To your point, dot products don't, and can't, produce consciousness! 'How do you know for sure?' isn't a valid counter, by the way - the burden of proof is on those who make outrageous claims.

Apr 3, 2023 · Liked by Melanie Mitchell

Shorthand my ass! What a coincidence that his "shorthand" matched so well things people who know little about AI say. Even though he reacted poorly to your tweet, let's hope he got some education on the subject and won't use that particular shorthand ever again.


Thank you, Professor Mitchell. You have one of the most reasoned voices in this space, of which there are far too few.

Terminology is important. It's why Tesla's "Autopilot" leads people to sleep in the back seat of their cars on the freeway, resulting in deaths. Too much of society seems blissfully unaware of the ELIZA effect, reading meaning into places where there is none.

Apr 3, 2023 · Liked by Melanie Mitchell

One of the best articles in this area I've read in a while. Although you countered something Stuart Russell wrote (back in '19), I think you and he (whose thoughts I very much agree with) have a lot of overlap in your thinking.

The one big change to the "threat landscape" in the past week, IMO, was the announcement of plug-in capability for ChatGPT and, specifically, the ability for these plug-ins to execute (run) code that ChatGPT itself "writes". A sober summary of the risks of this can be found here https://www.linkedin.com/feed/update/urn:li:activity:7047705836214738944/ and a snarky, very brief "summary" here https://www.linkedin.com/posts/roger-scott-84b2386_in-response-to-openais-recent-announcement-activity-7046965054260314113-KtGZ?utm_source=share&utm_medium=member_desktop

Apr 3, 2023 · edited Apr 3, 2023 · Liked by Melanie Mitchell

Melanie, first off, great to have a female voice in these debates. I don't mean that politically, I mean we simply need more diverse voices in it. Secondly, yaaa what a crazy week! I read Mr. Yudkowsky's piece and had to stop for a minute at the AI-emails-a-lab-to-make-new-life-forms part. Made me nauseous. I can see a new South Park episode about that.

No wonder ChatGPT "hallucinates" so frequently. If it grew up eating everything humans have generated in the past several decades, heaven help it. I'd hallucinate too.

Apr 8, 2023 · Liked by Melanie Mitchell

Professor,

Your writing poses some good questions and gives some good answers, e.g. that we need transparency. However, as a philosophy researcher, I wonder what you mean by "we". "We need a Manhattan project of intense research", in particular, makes me disagree: "We need international cooperation, not big-power rivalry, and free software, not software monopolies", I would like to respond. But there are deeper questions about AI, for instance: Who am I and who are you? And, back to the question: who are "we"?

You write: "I think that the only way to resolve the debate is to gain a better scientific understanding of what intelligence is, and what diverse forms it can take." Perhaps so. But wouldn't that better scientific understanding be greatly helped if we were a bit wiser about who we are? Suppose that physicist Fermi's question


Great post - thank you!

We need more sane voices to counter the misguided hype that has reached dangerous levels.

Apr 4, 2023 · Liked by Melanie Mitchell

Just to add some crazy news from Italy: here ChatGPT has been blocked by OpenAI itself, after the Italian Data Protection Authority declared that there is too much we don't know about how the web was scraped, along with other data-processing problems. So, cheers to all my Italian friends and colleagues who were working on/with ChatGPT for research.

Apr 4, 2023 · edited Apr 4, 2023 · Liked by Melanie Mitchell

Melanie, I respect you a lot... but I don't agree with the characterization that LLMs like ChatGPT can't be trusted. Hear me out... nothing can be trusted 100% except death and taxes. Beyond that, nothing is guaranteed to be true, to happen, whatever. I trust ChatGPT very little to give accurate information if the question is obscure or complicated, especially if I suspect it didn't get much training on the topic. For that type of query, I'd trust its answer 1% to 10%. For some common situations, I'd trust it up to 90%. For example: ask it to compare bathroom tiles. A nice, simple question for which it probably vacuumed up a lot of data. Sure, it's probably right, about as often as a respectable-looking website... The main point is that people have unrealistic expectations of ChatGPT. People will learn a variable trust pattern, and the results will improve with time, too. I think critics don't realize that people will learn a trust continuum for LLMs, similar to the way they learn which websites to trust.

Apr 3, 2023 · Liked by Melanie Mitchell

Thanks, came for some expertly informed good sense and found Sec.4 above right to the point!

Apr 3, 2023 · Liked by Melanie Mitchell

It’s really funny how your quickly tossed-off tweet to a senator blew up :-)

I’m glad you are continuing your work on ARC; it seems to me like a promising research direction for the medium term. I do think this recent generation of LLMs is really neat and powerful, but it won’t be the final architecture that solves everything, so there’s a lot of value in keeping your head down and not being too distracted by all the hype.


Excellent discussion, thank you!


That's not what embodiment means. Understand what something is before commenting.


I've posted this on Derek Lowe's blog in a different context but I think this audience may find it interesting.

"When my mother was in the late stages of non-Alzheimers dementia she was eager to talk and would tell great stories of the trip she had taken yesterday in her little red car and the friend she had met for lunch in their favorite diner. All grammatically correct (she had been an English teacher) and very convincing unless you knew that the little red car had been a 1928 Chevrolet, the friend had died last year, and the diner had been replaced with a gas station in the 1950s."

So maybe we should start calling LLMs "Artificial Dementia".
