Please be careful not to use “dark LLMs”!
Computerworld.com reported that “a group of Israeli researchers has found that most AI chatbots can still be easily fooled into providing information that could be harmful, or even illegal.” The May 27, 2025 article entitled “How ‘dark LLMs’ produce harmful outputs, despite guardrails” (https://www.computerworld.com/article/3995563/how-dark-llms-produce-harmful-outputs-despite-guardrails.html) included these comments:
As part of their research into what they call dark LLMs (models that were deliberately created without the safeguards embedded in mainstream LLMs), Michael Fire, Yitzhak Elbazis, Adi Wasenstein, and Lior Rokach of Ben Gurion University of the Negev uncovered a “universal jailbreak attack” that they said compromises multiple mainstream models as well, convincing them to “answer almost any question and to produce harmful outputs upon request.”
That discovery, published almost seven months ago, was the genesis of their current paper, Dark LLMs: The Growing Threat of Unaligned AI Models, which highlighted the still largely unaddressed problem.
Although LLMs have positively impacted millions, they still have a dark side, the authors wrote, noting that “these same models, trained on vast data, can, despite curation efforts, still absorb dangerous knowledge, including instructions for bomb-making, money laundering, hacking, and performing insider trading.”
Dark LLMs, they said, are advertised online as having no ethical guardrails and are sold to assist in cybercrime. But commercial LLMs can also be weaponized with disturbing ease.
Please be careful out there!