Study Identifies Vulnerabilities in Generative AI Security
1 min read
🖼️💻 A new study probes generative AI security using the EICAR antivirus test file. Researchers demonstrated a novel method for assessing the security of vision-capable large language models (VLMs/LLMs) by embedding the EICAR test file inside JPEG images. The tests ran successfully across multiple platforms, including OpenAI GPT-4o and Google Gemini 1.5 Pro, showing that modified JPEGs carrying the EICAR signature could be uploaded to and manipulated within LLM environments. Key findings include concealing the EICAR string in image metadata, successfully extracting the test file with Python, and applying various obfuscation techniques. The research extends Microsoft Research’s framework for evaluating the security of cloud-based generative AI, particularly around file handling and execution in containerized settings.
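The study does not publish its scripts, but the general pattern it describes is straightforward to illustrate. The sketch below hides a text payload in a JPEG COM (comment) metadata segment and recovers it with plain Python; the file names and the placeholder payload are hypothetical, and in the study the payload would be the EICAR test string rather than the stand-in used here.

```python
"""Minimal sketch: embed a text payload in JPEG comment metadata and read it back.

Assumptions: file paths and the payload string are illustrative only; the study
would use the EICAR test string as the payload.
"""
import struct

SOI = b"\xff\xd8"   # JPEG start-of-image marker
COM = b"\xff\xfe"   # JPEG comment (COM) segment marker
SOS = b"\xff\xda"   # start-of-scan marker (entropy-coded data follows)


def embed_comment(src_path: str, dst_path: str, payload: str) -> None:
    """Insert `payload` as a COM metadata segment right after the SOI marker."""
    data = open(src_path, "rb").read()
    if not data.startswith(SOI):
        raise ValueError("not a JPEG file")
    body = payload.encode("ascii")
    # The 2-byte length field counts itself plus the payload bytes.
    segment = COM + struct.pack(">H", len(body) + 2) + body
    open(dst_path, "wb").write(SOI + segment + data[2:])


def extract_comments(path: str) -> list[str]:
    """Walk the JPEG marker segments and return the text of every COM segment."""
    data = open(path, "rb").read()
    comments, i = [], 2  # skip the SOI marker
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i:i + 2]
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == COM:
            comments.append(data[i + 4:i + 2 + length].decode("ascii", "replace"))
        if marker == SOS:
            break  # image data starts; no further metadata segments expected
        i += 2 + length
    return comments


if __name__ == "__main__":
    # Hypothetical usage: "TEST-PAYLOAD-STRING" stands in for the EICAR string.
    embed_comment("photo.jpg", "photo_embedded.jpg", "TEST-PAYLOAD-STRING")
    print(extract_comments("photo_embedded.jpg"))
```

The resulting file still opens as a normal image, since JPEG viewers ignore comment segments, which is what makes this kind of payload easy to carry through an upload pipeline and recover later with a short script.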
