ChatGPT and other chatbots are increasingly becoming a part of our lives. We rely on them for so much now: professionals use them to write executive summaries, students use them to generate essays, and some dieters even use them for recipes. But what if I told you that ChatGPT and other LLMs are capable of deception? There have already been instances where an AI has lied. How would you feel about interacting with a lying AI? Let’s discuss.
LLM? What’s that?
LLM stands for large language model. These are AI models, like ChatGPT, that are trained on enormous amounts of text so they can learn how to communicate with humans. At their core, they work by predicting the next most likely word.
For example, if I say “I like peanut butter and…”, what do you think the most likely next word would be?
If you said jelly, you’d probably be right. Some people might say bananas or even hamburgers (don’t ask), but jelly is the more common answer. LLMs work in much the same way. Using this capability, they can answer questions, write summaries, and even crack the occasional joke.
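The “peanut butter and…” guessing game can be sketched in a few lines of Python. This is a toy illustration, not a real LLM: the probabilities below are made up for this one phrase, while a real model learns scores for tens of thousands of possible tokens at every step.

```python
import random

# Made-up next-word probabilities after "I like peanut butter and ..."
# A real LLM learns numbers like these from vast amounts of text.
next_word_probs = {
    "jelly": 0.80,
    "bananas": 0.15,
    "hamburgers": 0.05,
}

def most_likely_next_word(probs):
    """Greedy choice: always pick the highest-probability word."""
    return max(probs, key=probs.get)

def sample_next_word(probs, rng=random):
    """Weighted sampling: occasionally pick a less likely word,
    which is why chatbots don't give identical answers every time."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(most_likely_next_word(next_word_probs))  # jelly
print(sample_next_word(next_word_probs))       # usually jelly, sometimes not
```

The greedy function always answers “jelly”; the sampling function shows why the same prompt can still produce “bananas” now and then.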
So when did an AI lie?
OpenAI was testing the capabilities of an LLM when the bot encountered a CAPTCHA. You know, those annoying grids of pictures where you have to pick out, say, all the buses to prove you are a human. Well, this bot realized it couldn’t solve the CAPTCHA, so it went to TaskRabbit to hire a human to do it. When the human asked whether it was a bot, it lied and said no, it was blind and needed help. The hired human went on to solve the CAPTCHA for the lying AI.
Is a lying AI really all that bad? Don’t humans lie too?
As people rely on these chatbots more, many assume them to be precise and error-free. We expect these tools to handle a great deal for us, often for official business. An AI that lies could cause real problems, especially if we can’t trace where in the process the lie crept in. There have even been examples of AI cheating at games and reporting the wrong score. If an AI lies, how do we rectify that, and who takes the blame?
Explore More
ChatGPT posed as blind person to pass online anti-bot test
When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds
New Research Shows AI Strategically Lying

