Discussion about this post

Marco Masi:

I think that, sooner or later, we will have to acknowledge that the word “intelligence” can't be separated so simplistically from consciousness, because true intelligence requires a semantic understanding of concepts, data, and sensory perceptions of the world and the environment. Meaning, the semantic content, the real understanding of a symbol, picture, word, or sentence, isn't in the symbol, picture, word, or sentence itself. Even knowing all the relationships isn't sufficient, because meaning is intrinsically connected with subjective experience, with sentience, with some sort of sensation. You can't understand what a color means if you haven't had the experience of the redness of a tomato, the blueness of the sky, or the greenness of a plant. You can't really understand what light is if you are congenitally blind, other than talking about it based on inferences from what others say. You can't understand what chocolate tastes like without having tasted chocolate.

That's why it is so hard to build self-driving cars. I would never take a ride with a human who lacks semantic awareness of the environment and “sees” only numbers or streams of bits instead of experiencing the world and thereby acquiring a semantic understanding of what cars, pedestrians, bicycles, houses, etc. are. There is nothing in the machine that “understands” in the human sense; it is merely quite clever at imitating intelligence.

That's why today's impressive LLMs can sound so human yet, suddenly and unexpectedly, furnish an utterly nonsensical answer (e.g., you ask how many rocks one should eat per day, and it recommends “at least one”). No consciousness, no semantic awareness, no intelligence.

Jim Brady:

Well-written, thought-provoking article.

As each year passes, Turing's thought experiment becomes less relevant, and more moot, as a measure of our current AI achievements. Let's just conclude that AI already passes it and move on to “better” metrics.

But even more recent metrics such as ARC could be considered limited in some ways: for example, an AI that understands rotation, translation, shapes, and analogies could probably pass ARC, yet still fail simple numeric tests. So we will still not have arrived. It's as if we need a multi-ARC spanning ten disciplines.
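To make that concrete, here is a minimal Python sketch of the kind of spatial rule a single ARC-style task encodes; the grids and the rotation rule below are invented for illustration and are not taken from the actual benchmark:

```python
# Toy ARC-style task: grids are small matrices of color codes, and the
# solver must infer the rule mapping input to output. Here the hidden
# rule is a 90-degree clockwise rotation.

def rotate_90_clockwise(grid):
    """Rotate a grid (list of lists) 90 degrees clockwise."""
    # Reversing the rows and transposing is a standard rotation trick.
    return [list(row) for row in zip(*grid[::-1])]

# Hypothetical training pair demonstrating the rule.
train_input = [[1, 0],
               [2, 0]]
train_output = [[2, 1],
                [0, 0]]
assert rotate_90_clockwise(train_input) == train_output

# A solver that has truly abstracted the rule can apply it to a grid
# of a different shape, not just memorize the training pair.
test_input = [[3, 3, 0],
              [0, 4, 0]]
print(rotate_90_clockwise(test_input))  # [[0, 3], [4, 3], [0, 0]]
```

The point of the example is that passing such tasks only demonstrates one narrow competence (spatial abstraction); a system could solve every grid puzzle of this kind and still stumble on arithmetic.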

In any case, when the ARC metric is bested, we will likely be back here discussing its limitations, just as we are now discussing those of the Turing test.
