Can we really trust ChatGPT to figure out state-sponsored threats?
SCWorld.com reported that “OpenAI said in its June security report that it spotted and disrupted a number of attacks, most originating in China and Russia, that appear to have been using ChatGPT to either generate code or automate the process of making social media posts or emails for social engineering campaigns.” The June 11, 2025 article, entitled “OpenAI bans ChatGPT accounts linked to state-sponsored threat activity” (https://tinyurl.com/yc8bu8ch), included these comments from “the OpenAI team report”:
AI investigations are an evolving discipline,…
Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses.
The report included a handful of case studies outlining the various ways in which OpenAI has seen threat actors use ChatGPT. Of the 10 selected cases, seven involved the use of ChatGPT for social engineering, while another two involved code generation for malware operations.
Do you trust AI?