23 Comments
Sep 21 · Liked by Melanie Mitchell

Longtime fan, so glad you're hosting a podcast. I'll be sure to check it out.

Sep 21 · Liked by Melanie Mitchell

You are covering an extremely important topic:

GenAI, and more specifically LLMs augmented with additional ground-truth info, has commoditized KNOWLEDGE. However, INTELLIGENCE is uniquely human (i.e., not machine). Even OpenAI's latest model, o1, mimics human reasoning, but it is a long way off from emulating full human reasoning, which includes perception, cognition, empathy, and other uniquely human faculties.

author

You'll find that some of our guests (but not all!) agree with you.


That makes for a healthy discussion. The deeper one explores the work at Stanford and MIT on "Theory of Mind" (i.e., neuroscience and neuropsychology), the more one appreciates the wonders of human intelligence. The challenge for most people is that they have had 25 years of Google Search training (usable by elementary school children) and little explicit training in critical thinking (what we had to do in the old days, pre-web).


I agree with Dr. Michael Levin that all life is intelligent. This implies that the mechanism of intelligence should be simple and universal. Cognitive scientists like to talk about its computational nature, but what type of "computation"?

According to Einstein, the measure of intelligence is the ability to change, and insanity is doing the same thing over and over again while expecting different results. The words "same," "different," and "change" seem to be there for a reason. I define intelligence as the ability to handle differences. It operates on comparable properties. Its core algorithm is the selection of the most fitting option among those available, with respect to relevant constraints.

Comparison stands behind all cognitive functions. The best illustration of intelligence is the game 20 Questions: note its performance; with at most 20 comparisons we can identify one of a million categories. It also illustrates how generalization works, if you move backwards.
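The 20-Questions arithmetic above can be sketched in a few lines: each yes/no question halves the candidate set, so 20 questions suffice to distinguish 2**20 (just over a million) possibilities. The function name `guess` below is purely illustrative.

```python
def guess(secret: int, low: int = 1, high: int = 2**20) -> int:
    """Count the yes/no questions needed to identify `secret` in [low, high]."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if secret <= mid:   # question: "Is it at most mid?" -> yes
            high = mid
        else:               # -> no
            low = mid + 1
    return questions

# Any secret among ~a million candidates is pinned down in at most 20 questions.
assert max(guess(s) for s in (1, 2**19, 2**20)) <= 20
```

Run backwards, the same tree of questions shows generalization: merging the two answers to a question recovers the broader category that contained both.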

For more details, consider checking my Substack, for example, this post - https://alexandernaumenko.substack.com/p/foundations-of-generalization

Sep 21 · Liked by Melanie Mitchell

This is the best news of 2024! I’m so excited! Thanks, Melanie!


Great to hear this, added to my queue. Important topics of conversation for our times. Thank you for agreeing to co-host!

Sep 21 · Liked by Melanie Mitchell

Ooh, looking fwd, Melanie :)


Will it be posted to SFI's YouTube channel?

author

Yes, I believe so.


This is awesome and a much-needed conversation. Will subscribe and surely recommend to others. Thanks for doing this!


Can’t wait to listen and learn!


Looking forward to the discussion. Will this latest wave of AI overpromising and underdelivering lead to yet another AI winter? Is ChatGPT-style AI just a combination of ELIZA and expert systems that depends on the human urge to anthropomorphize machines as conscious? Going to find my old copy of Kauffman's "Reinventing the Sacred" to get his take on it. Spoiler alert: as I recall, he does not agree with the connectionist claims but proposes something along the lines of Penrose and Hameroff.


You should probably be paid more. Come to think of it, I should probably be paid more too.


Wonderful news. I'll celebrate your inaugural broadcast from somewhere on Lake Superior.


A big problem we, as scientists, have faced for a long time is defining "intelligence" and "consciousness" across humans, animals, and machines. It's difficult, but we have to do it, whether in broad or narrow contexts. Perhaps building a grid of definitions with objective criteria would help. Otherwise, we're just talking without truly understanding, which reminds me of certain generative AI tools. 😉


A grid is surely the answer.


I'm not sure how much seriousness or irony I should read into your comment. Nevertheless, efforts at definition are important, especially when dealing with vague and complex concepts. It's difficult, but in medicine, for example, they do it. Moreover, it seems to me that we know enough about these subjects today to begin such an effort.


Yes, but what you are saying seems to trivialise the field a bit. Like, "If only we had just tried writing down the objective criteria, we'd have sorted it out." But it's not that these concepts are "vague" or even that they are "complex"; it's that they are filled with paradox and circularity. This is because they are inherently subjective, so the scientific method has trouble with them, and "objective criteria" sadly don't cut the mustard. Unbelievably smart and dedicated people spend their whole lives on these issues. Personally, not being that smart, I got out, and indeed I've been drawn increasingly towards the medical end of brain research as I've aged, where I might actually achieve something tangible and helpful.


Looking forward to listening and getting the message conveyed in this popular medium... the podcast!


Bravo! Very promising!


Wonderful! Just what we need! How many kinds of intelligence are there? Is intelligence a structure or a process? Is intelligence static, dynamic, generative, or transformative? Is it the number of neurons in a CNN and the parameter count in an LLM/LWM, or a function of the level of complexity generated by the brain or by AI models? Or a function of the intelligence embedded in the prompting? Is it stable or ephemeral? Is latent-space activation part of intelligence? Is it what is measured in tests, or what is observed in human behaviour? In analyzing human intelligence, is it meaningful, as with AI, to split it into training and inference? Isn't "technological intelligence" a better term than "artificial"? Is the inability to automate a flaw in human intelligence? Lots of questions require answers before maximum leverage from the full inherent capacity of LLM/LWM models can be expected. So, really looking forward to your podcast.
