Microsoft Offers $10,000 Challenge for AI Security Testing
/ 1 min read
Microsoft Launches $10,000 Challenge to Combat AI Prompt Injection Attacks as part of a collaborative effort with the Institute of Science and Technology Australia and ETH Zurich. The LLMail-Inject challenge invites participants to exploit a simulated email client that integrates a large language model (LLM) to execute prompt injection attacks. Competitors will attempt to trick the LLM into executing unintended commands, potentially leaking data or performing malicious actions. The challenge features several built-in defenses against such attacks, including Spotlighting and PromptShield. Teams can register on the official website, with the competition running from December 9 to January 20, offering prizes of $4,000 for first place, $3,000 for second, and additional rewards for subsequent positions.
