Discussion about this post

Herbert Roitblat

Nailed it! As usual. The models write fiction. Some of it is also useful. But they have no way to distinguish between fiction and reality because they have no representation of truth. There is only token probability.

Reddy Mallidi

Not surprising at all, Melanie. I often ask them to find references (as I am writing my next book), giving the same prompt to Copilot, Perplexity, ChatGPT, Gemini, and Claude. The accuracy level is quite bad; Perplexity does a reasonable job. There is also bias: Copilot finds information weighted toward Microsoft, and the others do the same ("their side is greener" syndrome). That's why I don't trust the output and verify everything. Of course they save me considerable time, but I always approach them from a "human in the loop" perspective.


