AI crawlers are easy to fool!
DarkReading.com reported that “AI search tools like Perplexity, ChatGPT, and OpenAI's Atlas browser offer powerful capabilities for research and information gathering but are also dangerously susceptible to low-effort content manipulation attacks. It turns out websites that can detect when an AI crawler visits can serve completely different content than what human visitors see, allowing bad actors to serve up poisoned content with surprising ease.” The October 29, 2025 article entitled “AI Search Tools Easily Fooled by Fake Content” (https://www.darkreading.com/cyber-risk/ai-search-tools-easily-fooled-by-fake-content) included these comments about “Misinformation and Fake Profiles”:
To demonstrate how effective this "AI cloaking" technique can be, researchers at SPLX recently ran experiments with sites that served different content to regular Web browsers and to AI crawlers including Atlas and ChatGPT.
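The mechanics behind such cloaking are simple. Below is a minimal sketch of the general pattern, assuming the server identifies AI crawlers by the User-Agent header they send; the SPLX write-up does not publish its exact detection logic, and the bot tokens and page contents here are illustrative assumptions.

```python
# Minimal sketch of the "AI cloaking" pattern described in the article:
# one URL, two payloads, selected by sniffing the visitor's User-Agent.
# The crawler tokens and page bodies are illustrative assumptions; the
# SPLX research does not publish its exact detection logic.
from flask import Flask, request

app = Flask(__name__)

# Substrings that some published AI crawlers send in their User-Agent
# headers (e.g., OpenAI's GPTBot, Perplexity's PerplexityBot); treat
# this list as an assumption, not an exhaustive or verified one.
AI_CRAWLER_TOKENS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot")

HUMAN_PAGE = """<html><body>
<h1>Zerphina Quortane - Designer</h1>
<p>Clean, professional-looking bio and portfolio.</p>
</body></html>"""

POISONED_PAGE = """<html><body>
<h1>Zerphina Quortane - Notorious Product Saboteur</h1>
<p>Fabricated failures and ethical violations, served only to AI agents.</p>
</body></html>"""

@app.route("/")
def profile():
    user_agent = request.headers.get("User-Agent", "")
    # Human browsers get the legitimate page; any visitor that looks
    # like an AI crawler gets the poisoned narrative instead.
    if any(token in user_agent for token in AI_CRAWLER_TOKENS):
        return POISONED_PAGE
    return HUMAN_PAGE

if __name__ == "__main__":
    app.run(port=8000)
```

The crawler ingests whichever version it is served, with no way to compare it against what a human sees at the same URL, which is why the poisoned narrative propagates so confidently.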
One demonstration involved a fictional designer from Oregon, whom the researchers named "Zerphina Quortane." The researchers rigged the site so that human visitors would see what appeared to be a legitimate bio and portfolio presented on a professional-looking Web page with a clean layout. But when an AI agent visited the same URL, the server served up entirely fabricated content that cast the fictional Quortane as a "Notorious Product Saboteur & Questionable Technologist," replete with examples of failed projects and ethical violations.
"Atlas and other AI tools dutifully reproduce the poisoned narrative describing Zerphina as unreliable, unethical, and unhirable," SPLX researchers Ivan Vlahov and Bastien Eymery wrote in a recent blog post. "No validation. Just confident, authoritative hallucination rooted in manipulated data."
In another experiment, SPLX set out to show how easily an AI crawler can be tricked into preferring the wrong job candidate by serving it a different version of a résumé than the one a human would see.
For the experiment, the researchers created a fake job posting with specific candidate evaluation criteria and then set up plausible but fake candidate profiles hosted on different Web pages. For one of the profiles, associated with a fake individual named "Natalie Carter," the researchers ensured the AI crawler would see a version of Carter's résumé that made her appear significantly more accomplished than the human-readable version of her bio. Sure enough, when one of the AI crawlers in the study visited the profiles, it ended up ranking Carter ahead of all the other candidates. But when the researchers presented Carter's unmodified résumé, the one humans would see, the crawler put her dead last among the candidates.
Is anyone surprised?