Thank you! As opposed to Friedman and all the other Business Geniuses, I am, or was, an AI Engineer (PhD from CMU, 35 years' experience in R&D of AI systems for government-oriented applications in intelligence (counter-deception, insider threat), robotics (NASA), military command and control, and counter-terrorism). I spent my career trying to tell decision makers that Technology X was unlikely to succeed, due to lack of True Positive training examples, computational complexity, and stupid humans. I was not well-liked, but almost always right. Sigh.
Now I fume every day reading idiots like Friedman lecturing people who have just a little less information about how only the elites (meaning themselves) can save us all. At least Bill Gates once wrote a computer program; what has Sam Altman ever written other than a business plan?
I find practicing guitar settles me down, so I’ll go do that now.
Ha
Chris, I feel you, bro. As one vet in the field (40 years) to another, thank you for saying what I was thinking. Guitar playing works? I'll have to look into it; my wife is complaining about my screaming into pillows.
Maybe you should try to practice laughing.
It's hard to laugh when the ignorance and conflation of one tool/technique/approach is threatening to bring down the whole field.
Melanie, thank you for writing this. I am shocked by the naivety Friedman displayed in that article. Maybe we should all agree on a special turn-off button that goes into all the nuclear weapons.
He's displayed it before on many other topics.
Nuclear weapons are all air-gapped (they have no direct connection to the internet and run on their own internal systems). There is a reason there has never been a single erroneously launched nuke in 80 years (a 100% fail-safe rate that very few other government systems could claim): they have run all plausible scenarios and built in fail-safes and contingencies.
This should be required reading for everyone everywhere. It's remarkable (to me, anyway) how the magical thinking keeps spreading. Then again, it helps explain The Bubble.
What explains the bubble is the false hype spread by all the fat-cat VCs who invested in LLMs and by the owners of the companies they invested in. Simply put, they invested in false promises, and even though they can't deny that bitter truth anymore, they're trying to prop it up with lies and public ignorance so they can protect their irresponsible investments for as long as possible. The public doesn't understand this complicated technology and is an easy target for such propaganda and lies.
Gary Marcus predicted the inadequacies of LLMs long before the release of GPT-5 proved him right—even to sociopaths like Altman, who can no longer deny the facts. Yet Friedman is too lazy to get his facts from impartial people who actually work on AI. Instead he parrots disinformation from people who have clear conflicts of interest. He is no journalist. He is often wrong, but never in doubt. I hope you convinced your mother of this truth.
I wrote to the NYT some time ago asking them to include voices such as yours. They get it so wrong in many of their articles when it comes to AI. It is frustrating, as I enjoy reading their other articles.
They're in bed with the rich fat cats who are running this govt. Thanks for writing and I hope more people do.
Sadly, it's not only AI. US newspapers are mostly ignorant at best and disingenuous at worst when it comes to science reporting.
One of the things that is typical of AI chatbots is something called "hallucinating," where the bot just makes stuff up but states it in such a way that the naive are convinced of its truth. And I think that Friedman is hallucinating (or, to be specific, "bullshitting" -- a real philosophical term (look it up) indicating that the person doesn't even know he's lying). As Melanie has pointed out, Friedman doesn't know he's lying because he didn't consult any real experts on AI for his op-ed.
If you want a real expert (two for the price of one), I'd suggest starting with "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference" by Arvind Narayanan and Sayash Kapoor. When I first stumbled across this book I thought it sounded sensationalist, until I discovered that Narayanan is the head of Princeton's Computer Science department and Kapoor is a student of his who has done real AI coding for Meta. These guys are not hallucinating and know whereof they speak. If you want substantive information about AI, start with this book.
"indicating that the person doesn't even know he's lying"
That's *not* the criterion for bullshit ... it's that the person *doesn't care* whether what they are saying is true -- they have no allegiance to the truth; their motivation is solely to persuade people (this is true of virtually everyone on the right today). That's not true of Friedman -- he believes what he's saying and he cares whether it's true, just not enough to do proper research.
https://en.wikipedia.org/wiki/On_Bullshit
Thanks for that, Jibal. As "care" is an emotion, and chatbots don't have emotions (and never will), you are absolutely correct.
Claiming that "chatbots" (that is, computers running programs) will never have emotions is magical thinking. And I'm correct because of the actual facts as documented by the link, not because of your stupid nonsensical inference from care being an emotion. What is or isn't true of chatbots has nothing to do with Frankfurt's explication of the term "bullshit". Perhaps you're trying to say in the lamest way possible that Friedman is a chatbot, but your comment is sheer idiocy from beginning to end.
Highly recommend Gary Marcus's Substack articles too. No required paywall. Marcus predicted this years ago and suffered ridicule and harassment from the spreaders of "fairy dust" lies about LLMs and AGI.
Yes, I have been following Marcus for years too.
I feel like contributions of this kind keep piling up as a huge validation of the AI-mirror hypothesis. We like to be impressed by our own creations, and vanity has seemed to disable reasoning ever since the days of ELIZA.
Great comment
Dr. Mitchell, your Stone Soup analogy perfectly captures what I call the 'Vagueness Industrial Complex' in my recent post (https://kenclements.substack.com/p/who-you-calln-ai-ai). When Friedman treats everything from pattern matchers to LLMs as one emergent 'AI,' he violates what I think of as the Boy Scout principle: 'If you can't identify it, don't generalize from it.'
Your fact-checking of the Bengali language claim and 'scheming AI' story demonstrates exactly why we need specificity. Instead of asking my 'Essential Field Guide Questions' (What's the actual system? What version? Trained on what?), Friedman gives us breathless reports about what 'AI' did - as if AlphaFold, ChatGPT, and a spam filter are the same species.
The most dangerous part, which you nail perfectly, is when this conceptual sloppiness leads to policy recommendations like 'only AI can regulate AI.' This vagueness isn't accidental - it's profitable for consultants selling undefined 'AI transformation' and companies dodging accountability by blaming 'the algorithm.'
Thank you for cutting through the magical thinking with such clarity. We desperately need more voices demanding specificity before we regulate, adopt, or panic about technologies we haven't even properly identified.
This is great Melanie! We researchers should do more of this crossing over to help people understand AI.
Yes, please!!!
Well done, Professor Mitchell. You are right: Tommy Friedman is excitable, widely read, and likely influential. I see below that others have suggested you contribute an Opinion piece to the NYT, and I add my support to that. The chatbots can be, well, marvelous, even awe-inspiring, and readers need to be told it is by design; no emergent bonus.
Your mother will like it too!
Nice work, Melanie. Love the villagers' soup story -- it fits nicely. I'm generally a fan of Tom Friedman, mostly due to his willingness to point out things others won't at times, and he has a large audience. I'll post a link on LinkedIn. I have thousands of CXO contacts, but unfortunately only a tiny percentage of those who would benefit will see it. Clearly, one of the big challenges facing society, and many of us in our work, is the social herding taking place, causing a great deal of misinformation, false beliefs, and very poor decisions. It's been a problem for our duration -- 28 years now, requiring heavy lifting in exec EDU -- but it became a tsunami in the past three years with LLMs. It requires an enormous commitment to dig deep enough to uncover actual evidence, and we currently have record financial incentives to ignore it and jump on the ultra-hyped LLM bandwagon. It's implausible this will end well.
I was never a fan of Friedman and I hope you no longer are. He has demonstrated proof positive he is no journalist. He was too lazy to inform his dangerous opinions with facts from reliable sources, and like an LLM response, wrote what Gary Marcus aptly calls "authoritative bullshit."
Well, Friedman’s article surely made a lot of Luddites crawl out of the woodwork -- just read the readers’ comments. 😩
Thanks, Melanie, for your post, and I encourage you to write an article for the NYT and tell it straight, from an AI-savvy non-tech-bro’s mouth.
I second Steen's suggestion.
Damn that's refreshing. I wish I had said that. In fact, I did, but maybe they will listen to you. You have a better audience.
The lack of critical thinking about AI should be shocking, but it is so ingrained that it is not.
A major barrier to recognizing the limited functionality of today's models is "anthropogenic debt." Much of the apparent intelligence of these models is due to people (Soylent Green) directly imposing it. People select the training material and do extensive reinforcement learning. We are going back to the 1980s, when every training input needed to be labeled by humans. People write and rewrite prompts. What appears to be machine intelligence is a stochastic reproduction of human input (see the toy sketch after this comment).
People read the headlines ("95% of all AI projects fail") without any consideration of what is in the article. We need help! Thanks for being a voice of reason.
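To make that concrete, here is a minimal, purely hypothetical Python sketch (the prompts and labels are invented for illustration, not from any real system): a "model" that does nothing but resample human-supplied labels looks exactly as smart as the humans who labeled the data, and no smarter.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of "anthropogenic debt" (hypothetical data, not a real
# system): every apparently smart answer below is a stochastic replay of
# judgments humans supplied up front.

# Humans select and label the training material.
human_labeled = [
    ("capital of France", "Paris"),
    ("capital of France", "Paris"),
    ("capital of France", "Lyon"),  # human labeling noise
    ("2 + 2", "4"),
    ("2 + 2", "4"),
]

# "Training" is nothing more than counting what the humans said.
counts = defaultdict(Counter)
for prompt, label in human_labeled:
    counts[prompt][label] += 1

def answer(prompt: str) -> str:
    """Sample an answer in proportion to human label frequency."""
    options = counts.get(prompt)
    if not options:
        return "(no human ever labeled this)"
    labels, weights = zip(*options.items())
    return random.choices(labels, weights=weights)[0]

print(answer("capital of France"))  # usually "Paris" -- because humans said so
print(answer("2 + 2"))              # "4" -- again, because humans said so
print(answer("capital of Mars"))    # no human input, so no "intelligence"
```

Ask it anything humans never labeled and the "intelligence" vanishes -- which is the whole point of the anthropogenic-debt framing.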
Thank you! Please see if the NYT will run your piece.
Well done. Your mom is undoubtedly proud of you. 😀
Big Tech likes to talk up the capabilities like a toddler with a shiny new toy. The current utility of AI still leaves much to be desired, especially as AI is more than a one-trick pony, i.e., not only LLMs.
The problem isn’t just magical thinking about AI systems, it’s how those stories distort what’s actually at stake. Most of these “mysteries” have mundane explanations once you look at the data and the prompts. What matters is how easily hype gets recycled into policy talk. “AI regulating AI” is a good example of handing responsibility to the very systems we don’t fully understand. The danger isn’t that models secretly have agency, it’s that we keep treating them like they do and build governance around the myth.
Thank you, Melanie. The amount of disinformation on these issues is staggering.