Think twice before using ChatGPT’s Atlas web browser!
DarkReading.com reported that “As a new AI-powered Web browser brings agentics closer to the masses, questions remain regarding whether prompt injections, the signature LLM attack type, could get even worse.” The November 26, 2025 article entitled “Prompt Injections Loom Large Over ChatGPT's Atlas Browser” (https://www.darkreading.com/application-security/prompt-injections-loom-large-over-chatgpt-atlas-launch) included these comments:
ChatGPT Atlas is OpenAI's large language model (LLM)-powered Web browser launched Oct. 21 and based on Chromium. Currently available for macOS (with other platforms to come), Atlas comes with native ChatGPT functionality including text generation, Web page summarization, and agent capabilities.
OpenAI advertises the agent as being able to "book appointments, create slideshows, and more, handling complex tasks from start to finish." ChatGPT's agentic capabilities are only available in Plus (for $20 per month) and Pro ($200 per month), though that is a fair bit more accessible than many of the far more premium agents seen earlier this year. And it's not alone. A quick search on Google shows a range of similar agentic browsers and extensions at various price levels.
But here's where things start to get dicey with AI and LLMs. Prompt injections refer to the practice of using a natural language prompt to get an LLM, such as a chatbot, to do something otherwise not intended by the entity responsible for it.
Prompt injections also exist in two forms: direct and indirect. A direct prompt injection, for example, might be to ask a chatbot a question that gets it to divulge sensitive company documentation. An indirect prompt injection is more complex because it involves the attacker inserting a prompt in a situation that does not directly instruct the LLM. This could mean the attacker sends the target an email with a malicious prompt hidden inside the body that an AI assistant reads and follows, or it could mean including a malicious prompt as a hidden element on a Web page that an agent could inadvertently take in as it works.
What do you think?