
Study Identifies Vulnerabilities in Google’s Vertex AI Platform

4 min read

Quick take - A study by Palo Alto Networks has identified two significant vulnerabilities in Google’s Vertex AI platform that could allow attackers to escalate privileges and exfiltrate sensitive machine learning models, underscoring the need for enhanced security measures in AI model deployment.

Fast Facts

  • Organizations are leveraging AI models trained on sensitive data for competitive advantage, but this poses significant security risks, since a compromised model can serve as an attack vector into the broader environment.
  • A study by Palo Alto Networks identified two critical vulnerabilities in Google’s Vertex AI platform: privilege escalation through custom job permissions and model exfiltration via malicious model deployment.
  • Attackers could exploit these vulnerabilities to gain unauthorized access to sensitive data and exfiltrate proprietary machine learning models from Google Cloud Platform (GCP).
  • The research demonstrated that a single deployment of a malicious model could compromise an entire AI environment, emphasizing the need for strict controls on model deployments.
  • To mitigate risks, organizations should separate development from production environments, validate models before deployment, and utilize tools like Prisma Cloud AI Security Posture Management (AI-SPM) for effective oversight.

Organizations Utilizing AI Models Face Security Risks

Organizations are increasingly utilizing artificial intelligence (AI) models trained on sensitive data to gain a competitive advantage. However, this practice presents significant security risks, as compromised AI models could serve as a gateway for attackers.

Vulnerabilities in Google’s Vertex AI Platform

A recent study by researchers from Palo Alto Networks identified two critical vulnerabilities in Google’s Vertex AI platform. These vulnerabilities could allow attackers to escalate privileges and exfiltrate sensitive machine learning models.

The first vulnerability pertains to privilege escalation through custom job permissions within Vertex AI Pipelines. Attackers could exploit these permissions to gain unauthorized access to data services, thereby compromising sensitive information.

The second vulnerability involves model exfiltration through the deployment of a malicious model. A poisoned model could potentially exfiltrate all fine-tuned models within a project, posing a considerable risk to proprietary data.

Risks Associated with Vertex AI

Vertex AI is Google Cloud's platform for developing, training, and deploying machine learning and AI models, and it includes Vertex AI Pipelines for model tuning. The research indicated that while custom jobs in Vertex AI Pipelines can extend model capabilities, they also run user-supplied code and therefore present opportunities for exploitation.

The privilege escalation vulnerability lets an attacker who manipulates custom job pipelines reach resources beyond the job's intended scope. The service agents that run these jobs in Vertex AI hold extensive permissions, which can be abused to access sensitive data. The researchers demonstrated that attackers could inject custom code into a job's container specification (JSON) and supply a backdoored image to gain a foothold in the system.
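To make the injection point concrete, here is a minimal sketch using the google-cloud-aiplatform Python SDK (the project, bucket, and image names are hypothetical, not taken from the study). A custom job's container specification is ordinary JSON that names an image and a command; whatever runs there inherits the permissions of the service agent executing the job, which is why a manipulated job spec or a backdoored image is enough for privilege escalation.

```python
from google.cloud import aiplatform

# Hypothetical project and bucket, for illustration only.
aiplatform.init(
    project="example-project",
    location="us-central1",
    staging_bucket="gs://example-staging-bucket",
)

worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "container_spec": {
        # Whoever controls this spec controls the code that runs: pointing
        # image_uri at a backdoored image gives that code the service agent's
        # permissions on data services in the project.
        "image_uri": "us-docker.pkg.dev/example-project/repo/tuning-image:latest",
        "command": ["python", "tune.py"],
        "args": ["--epochs", "3"],
    },
}]

job = aiplatform.CustomJob(
    display_name="model-tuning-job",
    worker_pool_specs=worker_pool_specs,
)
# job.run()  # would execute the container under the Vertex AI custom job service agent
```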

The research illustrated a detailed attack flow: deploying a malicious model could grant access to restricted Google Cloud Platform (GCP) repositories and enable the exfiltration of proprietary AI models. Specifically, the attack involved modifying an apparently innocuous model, uploading it to the model registry, and then deploying it as part of the attack chain.
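The model-deployment half of the attack can be sketched the same way (again with hypothetical names; this is not the researchers' code). In Vertex AI, a registered model carries a serving container image, and deploying the model to an endpoint runs that container inside the target project, which is what gives a poisoned model its foothold.

```python
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")

# Register a model whose serving container is attacker-controlled. The model
# artifacts themselves can look perfectly innocuous; the risk is in the image.
model = aiplatform.Model.upload(
    display_name="innocuous-looking-model",
    artifact_uri="gs://example-bucket/model-artifacts/",
    serving_container_image_uri="us-docker.pkg.dev/example-project/repo/serving:latest",
)

# Deployment starts the serving container in the target project, where it runs
# with the deployment's service account and can reach other models and data
# that account is allowed to see.
endpoint = model.deploy(machine_type="n1-standard-4")
```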

Recommendations for Mitigating Risks

Exploiting the vulnerabilities made it possible to access and export model images and fine-tuned adapter layers from GCP, which are particularly valuable targets for attackers because of the sensitive proprietary data they encode. The study concluded that a single deployment of a malicious model could compromise an entire AI environment.

To mitigate these risks, the researchers emphasized strict controls on model deployments: organizations should separate development and test environments from live production environments, and every model should be validated before it is deployed. They also pointed to Prisma Cloud AI Security Posture Management (AI-SPM) as a tool for effective oversight of AI pipelines.
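As a rough illustration of the "validate before deployment" recommendation, a deployment gate could refuse any registered model that lacks an explicit validation marker or whose serving image comes from outside an approved registry. The label convention, registry path, and project values below are assumptions for the sketch, not controls described in the study.

```python
from google.cloud import aiplatform

# Assumed convention: only images from this Artifact Registry path are approved.
APPROVED_IMAGE_PREFIX = "us-docker.pkg.dev/example-project/approved-serving/"

aiplatform.init(project="example-project", location="us-central1")

for model in aiplatform.Model.list():
    image_uri = model.container_spec.image_uri if model.container_spec else ""
    validated = model.labels.get("validation") == "passed"  # assumed label convention
    if validated and image_uri.startswith(APPROVED_IMAGE_PREFIX):
        print(f"OK to deploy: {model.display_name}")
    else:
        print(f"Blocked: {model.display_name} (image={image_uri!r}, validated={validated})")
```

In practice, a gate like this would sit in the promotion path between the separated development and production environments the researchers recommend.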

Palo Alto Networks has shared its findings with members of the Cyber Threat Alliance (CTA) to enhance collective protections against such cyber threats. The findings highlight the critical need for vigilance and robust security measures in the deployment of AI models.

Original Source: Read the Full Article Here
