
"A critical-severity bug in Docker's Ask Gordon AI assistant can be exploited to compromise Docker environments, cybersecurity firm Noma Security warns. Named DockerDash, the bug exists in the MCP Gateway's contextual trust, where malicious instructions injected into a Docker image's metadata labels are forwarded to the MCP and executed without validation. "In modern AI architectures, the Model Context Protocol (MCP) acts as a bridge between the LLM and the local environment (files, Docker containers, databases). MCPs provide the 'context' AI needs to answer questions,""
""Gordon AI reads and interprets the malicious instruction, forwards it to the MCP Gateway, which then executes it through MCP tools. Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture," Noma says. The cybersecurity firm has named the technique 'meta-context injection' and explains that it allows an attacker to hijack an AI's reasoning process."
A critical vulnerability in Docker's Ask Gordon AI assistant enables malicious instructions hidden in Docker image metadata labels to be forwarded to and executed by the Model Context Protocol (MCP) Gateway without validation. The MCP supplies contextual information from local environments to AI agents, and the gateway does not distinguish informational metadata from executable internal instructions. Attackers can embed commands in image metadata that the assistant forwards to MCP tools, resulting in execution with zero validation. The technique, called meta-context injection, can produce remote code execution in cloud/CLI deployments and data exfiltration on desktop implementations.
Read at SecurityWeek
Unable to calculate read time
Collection
[
|
...
]