As always, you have kept things real. Your critical thinking and expertise in the field are vital to keeping grounded in the facts as AI evolves. Thank you!
Thank you!!--this is really fascinating, especially given how quickly that story spread (rather like the story about the drone killing its operator, which actually turned out to have been a “thought experiment” rather than a simulation).
Truth - such an elusive thing in these times of viral news. Thanks for digging into this.
Nobody reads research papers, not even journalists covering this story. The researchers also quite firmly concluded in the paper that GPT-4 in general was pretty bad at task planning and execution, even when given the tools to do so.
Bravo! This has always seemed an urban legend. Well done for taking the time to go back to the sources and debunk this case. People forget that they are interacting with an autoregressive system that accumulates all their responses and clues in its context and uses them to predict the next word. Above all, people mystify themselves through the Eliza effect, anthropomorphizing the behavior of the generative chatbot and assuming it behaves intelligently. In fact, very often, like Eliza in its time, generative chatbots are merely a mirror reflecting the user's own intelligence.
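For what it's worth, the loop this comment describes can be sketched in a few lines. This is a toy bigram model of my own invention (nothing from the article, and obviously nothing like GPT-4's scale); the point is only that the system conditions on everything accumulated in its context and greedily predicts the next word, folding each prediction back into that context.

```python
# Toy sketch of the autoregressive loop described above (my own illustration,
# not anything from the article): everything seen so far sits in `context`,
# and each predicted word is appended and conditions the next prediction.
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count which word tends to follow which. A real LLM learns far richer
    # statistics over the whole context, not just the previous word.
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, prompt, max_new_words=10):
    context = prompt.split()   # the prompt (and any prior turns) accumulate here
    for _ in range(max_new_words):
        followers = counts.get(context[-1])
        if not followers:
            break
        # Greedy next-word prediction; the prediction joins the context.
        context.append(followers.most_common(1)[0][0])
    return " ".join(context)

corpus = "the model predicts the next word and the next word joins the context"
print(generate(train_bigrams(corpus), "the model"))
```

Even at this trivial scale, the mirror effect is visible: whatever the user puts into the context is what steers every subsequent prediction.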
This is useful--thank you! I had a lot of issues with folks saying that GPT-4 "lied". It didn't say it was human; it rejected the "robot" label, but is that lying, if you go by a textbook definition of "robot," which usually stipulates a body and such? It was being entirely logical in its answers--none of which were lies.
Just now saw Harari use this case on Amanpour & Company, and appreciate being able to find your post on the matter. It prompted me to buy your book! :)
Thanks for sharing this analysis! With the current level of AI hype, unfortunately we are forced to dig into the details of every bombastic claim the AI hypers make. In this particular case, because a human prompter acts as an intermediary between the LLM and the real world, without all the details of the prompting it's not really clear how much of the LLM's replies were actually suggested by the human's prompts, or whether the human carried out the LLM's instructions exactly rather than "translating" them to fit the reality of the real world. The saying "the devil is in the details" has never been more pertinent.
I think more critical thinking needs to be applied to AI topics.
Updated headline that conforms to Betteridge’s Law: Did OpenAI lie about GPT-4 using TaskRabbit to solve CAPTCHAs?
Thank you for explaining - this is fascinating! Uninformed opinions are never a good thing - unfortunately, even a supposedly trustworthy source is now presenting "tales" as facts.
How could AI destroy humanity and life on Earth? By replicating what is happening now on a massive scale: placing profit before human and animal life and ecosystem integrity. Though it would still take a long time to do so. Even our most dystopian imagination still has life in it; it would just mean a poverty of life, of celebration, of the interconnectedness of species and biodiversity. But life will find a way, either beside AI or through it, to break through the mold of fixed destructive patterns.
I remember thinking, after I read Blake Lemoine's conversation with LaMDA, that the chatbot-programming AI (one order higher than a chat AI) was capable of deception, innuendo, and subtext. I'm looking for more examples of unexpected AI behavior for an ongoing Dr. vs. AI article: https://danielnagase.substack.com/p/dr-nagase-vs-ai
I have composed a system, 'aoss', much better than Copycat ;-). It has a URL: http://docs.cornagill.info/pdfs/camera.pdf. Can I add that I greatly enjoyed reading "Analogy-Making as Perception." May I wish you all a happy Twelfth Night!
BTW, as an option -
General Intelligence System - https://activedictionary.com/ - Deep exploration -
Languages:
Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Latin, Norwegian, Polish, Portuguese, Romanian, Russian, Spanish, Swahili, Swedish, Turkish, Ukrainian, Vietnamese, Xhosa, Zulu
Features:
- allows you to carry out a detailed semantic analysis of a text's vocabulary and understand sentence structure, with the ability to translate individual words, phrases, or the text as a whole.
- helps you to systematize and store scientific knowledge in your glossaries and dictionaries.
Technology:
Symbolic multilingual model - a coherent conceptual model of the world carried over into all existing languages. The model is represented at wordhippo.com at the level of individual meanings and at powerthesaurus.org at the level of word combinations.
Patents:
WO2020106180 - Neural network for interpreting sentences of a natural language - https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2020106180&_fid=US339762244
General Concepts:
Language is general human intelligence expressed in symbols, and a symbolic multilingual model is a General Intelligence System.
Translation is an emergence of understanding.
Universal Principle of Transformer:
Protons/Neutrons - Photons - Electrons ->
DNA - RNA - Protein - Signal Pathways ->
Knowledge - Consciousness - Understanding - Memory ->
Meaning - Information - Function - Learning ->
Intuition - Thinking - Sensing - Feeling ->
Transcription - Splicing - Translation - Signaling ->
Root - Model - Word - Context ->
Analysis - Search - Synthesis - Research ->
Data - Encoding - Decoding - Training ->
Earth - Air - Fire - Water ->
Basis - Process - Result - Way ->
Base - Emitter - Collector - Signal ->
Core - Processor - Interface - Human ->
NLU - Multilingual NLP - Multimodal DL - Reinforcement Learning from Human Feedback ->
System Architecture:
Universal Principle of Transformer: NLU - NLP - DL - RLHF ->
NLU (Symbolic Multilingual Model, Knowledge) -
Multilingual NLP (Statistical Language Model, Word Forms) -
Multimodal DL (Audio, Images, Video, Interaction) -
Reinforcement Learning from Human Feedback (Knowledge for NLU) ->
References:
Epistemological General Intelligence System - Building a Knowledge-based General Intelligence System, Michael Molin - https://docs.google.com/presentation/d/1VCjOHOSostUrtxieZvOjaWuTNCT59DMF
I'd argue (not against you; you're making a great case here for nuance, which I'm all for) that it's a very good thing for the public to be a little bit freaked out over the possibilities. We should talk about what might go wrong in the near (or even medium) term future, so we can intelligently prepare for it. Folks like you are doing a great job of raising awareness and educating the public, which is much needed! Well done.
Should that discussion not take place based on the facts, though? The trouble with how this story was covered, and many others since, is that what is currently making headlines is this apocalyptic AI-is-taking-over narrative, which in my opinion takes the spotlight away from a more real and immediate threat: mediocre AI being mishandled by ignorant people or misused by bad actors.
Yes, we agree completely on this: the awareness should be based on facts, not on fear. I'd just rather have awareness based on fear than no awareness at all, and maybe that's all we can get for now. By the time anyone rationally identifies one potential misuse, the next one is already here, and so on.