I have a gig writing four columns a year about artificial intelligence for Science Magazine’s “Expert Voices” project. These columns are non-technical and meant to be accessible to a general science-interested audience. My latest column is about “large reasoning models”—models based on LLMs but specially trained to “reason.” These include OpenAI’s o1 and o3 models and DeepSeek’s R1 model, among others.
This approach has been getting a lot of buzz in the AI world and has had some impressive successes on several benchmarks designed to test reasoning abilities.
How do these models work? Are they “genuinely” reasoning, or merely mimicking the patterns of human reasoning that they are trained on? And what exactly is reasoning, anyway?
Check out this column for some of my answers to these questions:
Artificial Intelligence Learns to Reason, Science, March 20, 2025
And let me know your comments and questions!
P.S. In case you missed them, here are my previous Science Expert Voices columns:
The Metaphors of Artificial Intelligence, Science, November 14, 2024
The Turing Test and Our Shifting Conceptions of Intelligence, Science, August 15, 2024
Debates on the Nature of Artificial General Intelligence, Science, March 21, 2024
AI's Challenge of Understanding the World, Science, November 10, 2023
How Do We Know How Smart AI Systems Are?, Science, July 13, 2023