37 Comments
Sep 21 · Liked by Melanie Mitchell

You are covering an extremely important topic:

GenAI, and more specifically LLMs augmented with additional ground-truth information, has commoditized KNOWLEDGE. INTELLIGENCE, however, remains uniquely human (i.e. not machine). Even OpenAI's latest model, o1, mimics human reasoning, but it is a long way off from emulating full human reasoning, which includes perception, cognition, empathy, and other uniquely human faculties.

author

You'll find that some of our guests (but not all!) agree with you.


That makes for a healthy discussion. The deeper one explores the work at Stanford and MIT on "Theory of Mind" (i.e. neuroscience and neuropsychology), the more one appreciates the wonders of human intelligence. The challenge for most people is that they have had 25 years of Google Search training (usable by elementary school children) and little explicit training in critical thinking (what we had to do in the old days, pre-web).


I agree with Dr. Michael Levin that all life is intelligence. It implies that the mechanism of intelligence should be simple and universal. Cognitive scientists like to talk about its computational nature. But what type of "computation"?

According to Einstein, the measure of intelligence is the ability to change, and insanity is doing the same thing over and over again while expecting different results. The words "same", "different", and "change" seem to be there for a reason. I define intelligence as the ability to handle differences. It operates on comparable properties. Its core algorithm is the selection of the most fitting option among the available ones with respect to relevant constraints.

Comparison stands behind all cognitive functions. The best illustration of intelligence is the game 20 Questions. Note its performance: since each yes/no answer can halve the remaining candidates, at most 20 comparisons suffice to single out one of over a million categories (2^20 = 1,048,576). It also illustrates how generalization works, if you move backwards.
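To make that arithmetic concrete, here is a minimal sketch of the halving logic in Python (my own illustration; the category count and the index-range scheme standing in for yes/no questions are assumptions made for the example):

```python
# Sketch: why 20 yes/no questions suffice for ~1 million categories.
# Each answer halves the candidate set, so 20 halvings cover 2**20 items.

def twenty_questions(target: int, num_categories: int = 2**20) -> int:
    lo, hi = 0, num_categories - 1
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if target <= mid:   # answer "yes": it is in the lower half
            hi = mid
        else:               # answer "no": it is in the upper half
            lo = mid + 1
    print(f"Found category {lo} in {questions} questions")
    return lo

twenty_questions(123_456)   # prints: Found category 123456 in 20 questions
```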

For more details, consider checking my Substack, for example, this post - https://alexandernaumenko.substack.com/p/foundations-of-generalization

Sep 21 · Liked by Melanie Mitchell

Longtime fan, so glad you're hosting a podcast. I'll be sure to check it out.

Sep 21 · Liked by Melanie Mitchell

Ooh, looking fwd, Melanie :)


This looks great. I'm excited to check out the podcast. I've been thinking a lot about the nature of intelligence for my forthcoming book about human faculties in the age of AI! Over the summer I produced a whole video on the topic: https://www.youtube.com/watch?v=mKT-Bbx-Jyo

author

Thanks -- I will take a look at your video. A while back (pre-genAI) I wrote a paper with a similar focus: https://arxiv.org/abs/2104.12871


Oh yeah! I recall skimming this. I’ll take another look.

Sep 21 · Liked by Melanie Mitchell

This is the best news of 2024! I’m so excited! Thanks, Melanie!


Great to hear this, added to my queue. Important topics of conversation for our times. Thank you for agreeing to co-host!


Will it be posted to SFI's YouTube channel?

author

Yes, I believe so.


Very good discussion re: thought vs. language in Episode 2. A hierarchical approach might be that language is derived from (i.e. is a subset of) one's thoughts, and that thoughts are a subset of one's intelligence. Said differently, one can only communicate what one understands, unless one is making things up (aka hallucinating). However, human language can lead to further human understanding (i.e. intelligence): "If you want to learn something, teach it".

A couple of points relevant to AI:

1. Re Steve's comments on LLMs understanding rules: there are no pre-programmed language algorithms in LLMs. The models learn exclusively from the training set and from subsequent fine-tuning and reinforcement learning. The training data may include statements of language rules, but the rules themselves are not pre-programmed (see the sketch after these points).

2. Given that language is a subset of human intelligence, LLMs [mostly] represent that which is known and communicated, and as such they are a subset of human intelligence. Where progress is being made is in reasoning models, such as OpenAI's o1, which exercise a recursive loop between linear inference and reinforcement learning from human feedback (RLHF). In such limited uses (today), LLMs are beginning to demonstrate human-level reasoning intelligence.
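To illustrate point 1 above, here is a toy next-token sketch in Python/PyTorch (my own illustration, orders of magnitude simpler than any real LLM; the corpus, model size, and training loop are assumptions). Nothing in it encodes a grammar rule; any regularity the model exhibits is absorbed from the training text:

```python
import torch
import torch.nn as nn

# Toy corpus; real LLMs train on trillions of tokens.
text = "the cat sat on the mat . the dog sat on the rug ."
vocab = sorted(set(text.split()))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in text.split()])

class TinyLM(nn.Module):
    """Predicts the next token from the current one (a bigram model)."""
    def __init__(self, vocab_size: int, dim: int = 16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)
    def forward(self, x):
        return self.head(self.emb(x))

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(200):                      # learn purely from co-occurrence
    logits = model(ids[:-1])              # inputs: tokens 0..n-2
    loss = nn.functional.cross_entropy(logits, ids[1:])  # targets: 1..n-1
    opt.zero_grad()
    loss.backward()
    opt.step()

# "sat" is always followed by "on" in the corpus, so the model learns it;
# that rule was never written into the code.
with torch.no_grad():
    logits = model(torch.tensor([stoi["sat"]]))
print(vocab[logits.argmax().item()])      # -> "on"
```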


This first episode was just fantastic and sparked new thinking on my part about a number of things, including the kinds of intelligence beyond those connected to language. One thing I wish you had addressed was Krakauer's repeated use of the word "computation" when referring to human intelligence. This intelligence-as-machine metaphor is widespread and has been around for a long time, but it is just a metaphor, and one that I think can confuse and mislead. For example, as I'm sure you know, many people, including Roger Penrose, have argued convincingly that human intelligence is non-computable. Anyway, I am looking forward to the next episode!


@Melanie Episode 1 was a good introductory discussion about intelligence. I particularly liked John's example about feelings. That example could be expanded to include all human sensory input that AI can't replicate, and, subsequently, the way human interpretation of such input, applied to human experience, leads to intuition-based conclusions.

On the flip side, I thought your guests' understanding of AI was quite limited. Even the Cal library example isn't sufficient, as AI training, unlike a library, captures relationships among words (tokens) in context and encodes them in the model's parameters. Thus, during inference (@Ben Dickson understands this well), LLMs can perform functions that are not available in libraries, such as real-time summarization, analysis, comparison, interpolation, and extrapolation.
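As a concrete (hedged) example of what inference adds on top of stored text, here is a sketch of real-time summarization using the OpenAI Python client; the model name and prompt are illustrative assumptions, not a prescription:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A library stores the article; the model can transform it on demand.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[{
        "role": "user",
        "content": "Summarize the following in two sentences: <article text>",
    }],
)
print(response.choices[0].message.content)
```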

Another misunderstanding is prompting (discussed in the trailer). Human prompting, using natural language, is not engineering (the application of math and science to problem solving; Google screwed up with this nomenclature). It is a skill, a learned skill, not dissimilar to what Steph Curry had to do to make 3,700+ three-pointers during his NBA career.

I will look forward to a later episode when you provide your own comparative analysis of GenAI versus human intelligence, particularly in light of newer models, such as OpenAI's o1, that provide more advanced reasoning capabilities, as has been demonstrated in genetics, quantum physics, and economics.


I’m interested in exploring the pluses and minuses across life stages 0-100+.


Huge fan of the Complexity podcast, this is amazing news!


Perfectly timed, thank you! I will be tuning in.


This sounds incredible! I can't wait to give it a listen.


This is awesome and a much-needed conversation. Will subscribe and surely recommend to others. Thanks for doing this!


Can’t wait to listen and learn!
