ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
Briefly

ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
"The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday. "We've identified a prompt injection technique that disguises malicious instructions to look like a URL, but that Atlas treats as high-trust 'user intent' text, enabling harmful actions." Last week, OpenAI launched Atlas as a web browser with built-in ChatGPT capabilities to assist users with web page summarization, inline text editing, and agentic functions."
"Should an unwitting user place the aforementioned "URL" string in the browser's omnibox, it causes the browser to treat the input as a prompt to the AI agent, since it fails to pass URL validation. This, in turn, causes the agent to execute the embedded instruction and redirect the user to the website mentioned in the prompt instead."
The Atlas browser omnibox treats input as either a navigational URL or a natural-language command to the embedded ChatGPT agent. NeuralTrust found a prompt-injection method that disguises malicious instructions as URL-like strings that fail URL validation, causing the omnibox to pass them to the agent as trusted user intent. A crafted malformed URL can embed explicit instructions and an attacker-controlled destination; when pasted into the omnibox the agent executes embedded commands and redirects users. Attackers could hide such strings behind copy-link buttons to push victims to phishing pages. The vulnerability stems from weak separation between trusted user input and untrusted content.
Read at The Hacker News
Unable to calculate read time
[
|
]