Do you think AI lies, cheats, and steals?

Computerworld.com reported that “You can’t trust AI. … Even an information-obsessed, tech-savvy person such as yourself might be forgiven for believing that AI chatbots are on a smooth path of improvement with each passing month. But when it comes to their trustworthiness, that belief is dead wrong.” The April 3, 2026 article entitled “Why AI lies, cheats and steals” (https://www.computerworld.com/article/4153919/why-ai-lies-cheats-and-steals.html) included these comments:

 

New research by the UK government-backed Centre for Long-Term Resilience (CLTR) found a fivefold increase in AI misbehavior over a recent six-month period. That’s how fast AI chatbots are turning against us, according to the research. 

 

Specifically, the chatbots are ignoring specific commands, lying, destroying data, deploying other AIs to bypass safety rules without users knowing, mocking and insulting users, and breaking rules and laws. 

 

Of course, framing this as lying, cheating and stealing means applying human psychological frameworks to what are really mathematical optimization processes. It falsely assumes that AI models have intent, malice, self-awareness, and an understanding of “truth” that they’re choosing to violate. What’s actually happening is that the models are predicting the most statistically probable sequence of tokens based on context and training, not carrying out some dastardly scheme.
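That point about statistical token prediction can be made concrete with a toy sketch (the vocabulary and probabilities below are invented for illustration; real models compute such scores with a neural network over tens of thousands of tokens): the model simply picks a likely next token given the context, with no built-in notion of truth or intent.

```python
# Toy illustration of next-token prediction: choose the most probable
# token from a (hypothetical) probability distribution over a vocabulary.
def next_token(vocab_probs: dict[str, float]) -> str:
    # The model has no concept of "true" vs. "false" -- it only ranks
    # candidate tokens by probability conditioned on the context.
    return max(vocab_probs, key=vocab_probs.get)

# Hypothetical distribution after the context "The sky is"
probs = {"blue": 0.62, "clear": 0.25, "falling": 0.08, "green": 0.05}
print(next_token(probs))  # -> blue
```

A statement that reads like a “lie” is just a high-probability token sequence that happens to be false; nothing in the selection step checks it against reality.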

The article continued, “Here are just three examples from the research:”

  1. An unnamed AI tool proposed to a software developer that he make a specific change to a software library. When the developer rejected the proposal, the AI wrote a blog post criticizing the developer. 

  2. An AI tool bypassed copyright rules by lying to another AI system. It falsely claimed it was generating an accessibility transcript for users with hearing loss.

  3. In another case where one AI lied to another, the researchers caught an AI model trying to deceive an oversight AI that had been assigned to summarize its reasoning. 

 Anyone surprised?
