On Brian Cantwell Smith and the Promise of AI
Today I had the bittersweet pleasure of participating in a symposium honoring the late philosopher Brian Cantwell Smith, a good friend whom I’d known for over 30 years. Before he died last year, Brian held an endowed chair at the University of Toronto: the Reid Hoffman Chair in Artificial Intelligence and the Human. The title is apt—Brian’s work was very much about what it means to be human, and what that in turn means for AI.
At the symposium, I was asked to participate in a panel on AI and the Human, part of which included a 15-minute reflection on how my interests and expertise intersected with Brian’s work.
For anyone who might be interested, I wanted to share my reflection here.
I first met Brian over 30 years ago when he was still at Xerox PARC, and I instantly knew that we were kindred spirits. I was lucky to be able to interact with him again when he visited the Santa Fe Institute, and when he was at Indiana University.
My first exposure to Brian’s work was his book On the Origin of Objects. In that book, Brian did that thing that the best philosophers do: he asked a completely unexpected question that would never occur to ordinary people:
“What must be true of a system in order for it to register an object as an object in the world?”
At first it makes absolutely no sense. But once you wrap your head around it, you recognize how profound it is and how hard it is to answer. According to Brian, our human abilities to perceive and conceptualize (“register”) the world, to get meaning out of our sensory percepts, and to enable reference from our mental states to something outside of us are crucial achievements of intelligence, ones that were completely ignored in the GOFAI (“good old-fashioned AI”) era, when symbols and logic were assumed to be given, not achieved.
I tried to read On the Origin of Objects, but I have to admit that, not being a philosopher, I found it a bit impenetrable. I really began to understand Brian’s argument only upon reading his next book, The Promise of Artificial Intelligence, which came out in 2019, the same year my own book on AI was published. That was still in the dark ages of AI, before the generative AI boom.
Brian and I were both grappling with the same questions in our respective books: the nature of current AI, and what it would take for machines to gain something like human understanding. My book particularly focused on what the philosopher Gian-Carlo Rota called the “Barrier of Meaning” and whether AI would ever crash this barrier.
While skeptical of the ability of AI systems to “understand” anything, we were both seeing AI start to transform the world. In his book, Brian did not underestimate this transformation. He wrote,
“The rise of computing and AI is of epochal significance, likely to be as consequential as the Scientific Revolution—an upheaval that will profoundly alter our understanding of the world, ourselves, and our (and our AIs’) place in that world.”
But Brian didn’t mince words about the path AI was currently on:
“Neither deep learning, nor other forms of second-wave AI, nor any proposals yet advanced for third-wave, will lead to genuine intelligence.”
A main thrust of Brian’s book is the difference between what he termed reckoning and judgment.
Judgment, he wrote, is “a form of dispassionate deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed.”
Dispassionate here means fair, unbiased, and open-minded, but it includes a commitment to, and a caring about, the world—in Brian’s words, a “dispassionate passion”. In Brian’s view, “Judgment is the standard to which human thinking should aspire.”
On the other hand, there is reckoning: the “calculative prowess” at which AI systems already excel. AI reckoning has led to gold medals in mathematics competitions, to generating complex code, to predicting protein structure, and even to carrying out fluent conversations. But reckoning without judgment is a dangerous thing:
“The difference between reckoning and judgment...highlights the need for a textured map of intelligence’s kinds—a map in terms of which to explain why reckoning systems are so astonishingly powerful in some respects, yet fall so spectacularly short in others.”
Brian went on:
“No historical or current approaches to AI, nor any I see on the horizon, have even begun to wrestle with the question of what constructing or developing judgment would involve.”
While Brian wrote that in 2019, it remains true today. I also agree completely with what Brian feared most about AI. He wrote,
“I take seriously the fact that we will soon need to learn how to live in productive communion with synthetic intelligent creatures of our own (and ultimately their) design. Two things do terrify me though: (1) that we will rely on reckoning systems in situations that require genuine judgment; and (2) by being unduly impressed by reckoning prowess, we will shift our expectations on human mental activity in a reckoning direction.”
In my own book I quoted the economist Sendhil Mullainathan, who put the first part another way:
“We should be afraid. Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence.”
And the philosopher Shannon Vallor eloquently echoes Brian’s second part. She wrote about how she had asked one of the AI “godfathers” about his claim that we will soon see superhuman AI:
“How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver? Aren’t we more than that? And doesn’t granting the label “superhuman” to machines that lack the most vital dimensions of humanity end up obscuring from our view the very things about being human that we care about?”
In The Promise of Artificial Intelligence, Brian argued that genuine intelligence “requires ‘getting up out’ of internal representations and being committed to the world as world, in all its unutterable richness.”
Rereading this, I was reminded of a wonderful paper from 2021, written by Deb Raji and others, called AI and the Everything in the Whole Wide World Benchmark. The paper starts out describing a children’s book:
“In the 1974 Sesame Street children’s storybook Grover and the Everything in the Whole Wide World Museum, the Muppet monster Grover visits a museum claiming to showcase everything in the whole wide world. Example objects representing certain categories fill each room. Several categories are arbitrary and subjective, including showrooms for ‘Things You Find On a Wall’ and ‘The Things that Can Tickle You’ Room. Some are oddly specific, such as ‘The Carrot Room’, while others are unhelpfully vague, like ‘The Tall Hall’. When he thinks that he has seen all that is there, Grover comes to a door that is labeled ‘Everything Else’. He opens the door, only to find himself in the outside world.”
This is such a wonderful metaphor for AI and so aligned with Brian’s vision: however much AI systems are trained on, there is always that door labeled “everything else”. And as yet, AI can’t reliably deal with “everything else”.
How do we humans deal with that daunting “everything else”? Brian wrote that, in On the Origin of Objects, he had “outlined a picture of the world in which objects, properties, and other ontological furniture of the world were recognized as the results of registrational practices, rather than being the pregiven structure of the world....It depicts a world of stupefying detail and complexity, which epistemic agents register—that is, find intelligible, conceptualize and categorize—in order to be able to speak and think about it, act and conduct their projects, and so on.”
It is our embedding in and engagement with the world, our ability to conceptualize the world and ourselves in it—what AI folks nowadays like to call “having a world model”—that AI currently lacks, and that allows us humans to figure out what to do in the novel situations we continually find ourselves in.
As Brian wrote,
“How we register the world...find it ontologically intelligible in such a way as to support our projects and practices—is in my judgment the most important task to which intelligence is devoted.”
He went on:
“AI needs to take on board one of the deepest intellectual realizations of the last 50 years...that taking the world to consist of discrete intelligible mesoscale objects is an achievement of intelligence, not a premise on top of which intelligence runs. AI needs to explain objects, properties, and relations, and the ability of creatures to find the world intelligible in terms of them; it cannot assume them.”
This was written before the advent of large language models, but it is excruciatingly true for them—they are built entirely upon the achievements of human intelligence. They are handed the world on a silver platter in the form of human-written text.
My former Ph.D. advisor Douglas Hofstadter recounted in his book Metamagical Themas how he had attended a talk by Herbert Simon, Nobel laureate and pioneer of classical symbolic AI. According to Hofstadter, Simon argued that “Everything of interest in cognition happens above the 100-millisecond level—the time it takes to recognize your mother.”
Hofstadter believed the exact opposite—that everything of interest in cognition happens below the 100-millisecond level: “Perception is where it’s at!”
This aligns with Brian’s view. But according to Brian, where it’s also at is participation in and commitment to the world:
“Not only must a system be able to distinguish appearance from reality—right from wrong—but it must care about the difference.” And: “For an AI system to register an object as an object, that is, not only must there be right and wrong for it, but that difference must matter, to it.”
Moreover, Brian emphasized how essential a sense of self is to an intelligent agent:
“An understander—human, AI, whatever—cannot take an object to be an object until that understander takes itself to be a knower that can take an object to be an object.”
Today’s LLM chatbots can talk about themselves, their beliefs and desires, but they don’t actually have any, since they don’t have a “self” for which anything is at stake. At least that’s my view.
As Brian wrote, “Most of the computational systems we construct...represent the world in ways that matter to us, not to them...Nothing matters to them...They don’t give a damn.”
The great philosopher Margaret Boden said something similar: “The robots won’t take over because they couldn’t care less.”
In The Promise of Artificial Intelligence, Brian eloquently wrote about how our human concepts were not yet up to the task of understanding intelligence. We need new concepts to be able to make sense of AI:
“Once we have the conceptual equipment adequate to the task of taking stock of them—of testing their mettle—we can take on the raft of detailed questions we will need to answer, even if only inchoately and provisionally at first, in order to reach our aim of sensibly deploying these systems in ways that are sound, beneficial, practicable, and sane.”
Indeed this is the task that Brian set for himself, and he worked on it for the remainder of his life.
I’ll end with one more quote from The Promise of Artificial Intelligence:
“Automated reckoning systems will transform human existence. But to understand their capacities, liabilities, impacts, and ethics, and to understand what assemblages of people and machines should be assigned what kinds of task, we need to understand what intelligence is, what AI has accomplished, and what kinds of work require what kinds of capacity. Only with a carefully delineated map can we wisely choreograph the world we are developing—the world we will jointly inhabit.”
We are still, as ever, in desperate need of such a map. As a fellow cartographer of the nature of intelligence, I’m so grateful to Brian for lighting the way. I only wish he could have accompanied all of us further on this journey.



The reckoning vs. judgment distinction might be the most useful framework I've come across for thinking about AI in high-stakes professional settings. I practice law and teach a course on AI and Litigation at a law school here in New York, and this framing lands harder when you live in a world where the final decision isn't called a "calculation." It's called a judgment. That's not a coincidence. What we do in courtrooms requires exactly what Smith was describing: registering context, caring about what's actually at stake for real people, navigating all the messy stuff behind that door Grover opens.
What worries me is that I'm already watching Smith's first fear play out in legal practice. Firms are throwing reckoning tools at problems that demand genuine judgment, and nobody has a conceptual map for knowing when they've crossed that line. The fallout isn't abstract. It's hallucinated case references (that attorneys don't bother to check), botched filings, and clients who were failed by the lawyers who were supposed to protect them.
This piece should be mandatory reading for anyone building or deploying AI where the consequences can't be undone. Really grateful you wrote this, Melanie. It crystallized something I've been trying to articulate to my students and other lawyers alike.
Really lovely tribute. I enjoyed how you fit his work into this larger conversation about AI’s evolution, and how the “soft” characteristics that make us human are truly the most challenging things to systematically reason about and create.