As always, you have kept things real. Your critical thinking and expertise in the field are vital to keeping grounded in the facts as AI evolves. Thank you!
Thank you!!--this is really fascinating, especially given how quickly that story spread (rather like the drone killing its operator story that actually turned out to have been a “thought experiment” rather than a simulation).
Nobody reads research papers, not even journalists covering this story. The researchers also quite firmly concluded in the paper that GPT-4 in general was pretty bad at task planning and execution, even when given the tools to do so.
Bravo! This has always seemed an urban legend. Well done for taking the time to go back to the sources and debunk this case. People forget that they are interacting with an autoregressive system that accumulates all their responses and clues in context and uses them to predict the next word. Above all, people mystify themselves through the Eliza effect by anthropomorphizing the behavior of the generative chatbot and assuming it has intelligent behavior. In fact, very often, like Eliza in its time, generative chatbots are merely a mirror reflecting the user's own intelligence.
This is useful--thank you! I had a lot of issues with folks saying that GPT-4 "lied". It didn't say it was human; it deflected the "robot" question, but is that lying by a textbook definition, which usually stipulates a body and such? It was being entirely logical in its answers--none of which are lies.
Wonderful piece. The most revealing thing for me was to see how clickbait-y Nature has become. Not surprising that a €1.5B publishing behemoth (Springer) would be heading down that road, but still sad to those of us who grew up in awe of the journal (and those who'd had articles published in it).
Just now saw Harari use this case on Amanpour & Company, and appreciate being able to find your post on the matter. It prompted me to buy your book! :)
Thanks for sharing this analysis! With the level of current AI hype, unfortunately we are forced to always dig into the details for any bombastic claim made by the AI hypers. In this particular case, because a human prompter acts as an intermediary between the LLM and the real world, without all the details of the prompting it's not really clear how much of the replies of the LLM were actually suggested by the prompt from the human and whether the human carried out the instructions from the LLM exactly or "translated" them so that they fit the reality of the real world. The saying "The devil is in the details" has never been more pertinent.
No, the problem is that now the AI knows how to solve a CAPTCHA (by going to TaskRabbit and lying). The difference between previous machines and AI is that AI can learn new things by doing them. Humans know how to lie but machines didn't; now AI can lie too. Today it was for CAPTCHA solving, but it can apply the same knowledge of lying in other fields.
Thank you for explaining - this is fascinating! Uninformed opinions are never a good thing - unfortunately, even a supposedly trustworthy source is now presenting "tales" as facts.
How could AI destroy humanity and life on earth? By replicating what is happening now on a massive scale: placing profit before human life, animal life, and ecosystem integrity. Though it would still take a long time to do so. Even our most dystopian imagination still has life in it; it would just mean a poverty of life, of celebration, of interconnectedness of species and biodiversity. But life will find a way, either besides AI or through it, to break through the mold of fixed destructive patterns.
I remember thinking, after I read Blake Lemoine's conversation with LaMDA, that the chatbot-programming AI (one order higher than a chat AI) was capable of deception, innuendo, and subtext. I'm looking for more examples of unexpected AI for an ongoing Dr. vs. AI article. https://danielnagase.substack.com/p/dr-nagase-vs-ai
I have composed a system, 'aoss', much better than Copycat ;-). It has a URL: http://docs.cornagill.info/pdfs/camera.pdf. Can I add that I greatly enjoyed reading "Analogy-Making as Perception"? May I wish you all a happy Twelfth Night!
Truth - such an elusive thing in these times of viral news. Thanks for digging into this.
Updated headline that conforms to Betteridge’s Law: Did OpenAI lie about GPT-4 using TaskRabbit to solve CAPTCHAs?
I think more critical thinking needs to be applied to AI topics.