Lying, gaslighting, & blackmailing — what's going on with AI chatbots?
In my final column for Science’s “Expert Voices” series, I wrote about why AI chatbots like ChatGPT and Claude are misbehaving in all kinds of ways.
I hope you will read it! I look forward to your comments.



Nailed it, as usual. The models write fiction; some of it happens to be useful. But they have no way to distinguish fiction from reality, because they have no representation of truth: there is only token probability.
Not surprising at all, Melanie. I often ask them to find references (as I am writing my next book), giving the same prompt to Copilot, Perplexity, ChatGPT, Gemini, and Claude, and the accuracy is quite bad. Perplexity does a reasonable job, but there is also bias: Copilot finds information weighted towards Microsoft, and the others do the same (a "grass is greener on their side" syndrome). That's why I don't trust the output and verify everything. Of course they save me considerable time, but I always approach them from a "human in the loop" perspective.