Machines are now able to generate text that is hard to distinguish from text written by humans. This gives rise to many potential problems: for example, it enables efficient automation of misinformation, spam, impersonation, student cheating on writing and coding homework, and so on. How can society deal with the coming deluge of machine-generated text? This is an urgent problem, but fortunately there is a lot of promising research in the works aimed at enabling people to detect when a text has been generated by a machine.
Hi Melanie, excellent article!
BTW, the humorous alternative to the Shakespeare one is this one by Mark Twain :)
https://www.allgreatquotes.com/huckleberry-finn-quote-127/
Thanks for your great articles. I wonder whether "tweaking the language" could be a better alternative. In wartime, people develop codes, slang, and jargon, create unique metaphors and unusual ways to use specific words, and develop their own "community/tribe" language. This is why, as an example, I think it is very difficult for Generative AI to adapt and create TikTok content (teens I know laugh at content that is generated for teen platforms). Humans will always find ways to distort the language faster than machines can comprehend it. They have done it to resist other groups of humans, so why not AI ;-)
Hi, we were wondering whether Generative AI is covered in the book. Is it under a different name?
My book covers earlier examples of it (e.g., automatic image caption generation, machine translation), but not the latest "large language model revolution". I'm in the process of writing some new chapters on these later developments.
A very interesting book. The material is mostly unfamiliar to me. I liked the discussion about symbolic vs. subsymbolic approaches. Maybe because our authentication approach is a bit subsymbolic. Of course, we did not know this when we developed it.
Thanks for a great book. Looking forward to finishing it soon :)
" I'm in the process of writing some new chapters on these later developments. "..
Once done, if possible, can you please publish those here also? Interested in reading those.