Machine Learning Hardware Vulnerabilities Exposed by New Research
/ 1 min read
🎭 New research reveals vulnerabilities in machine learning hardware through backdoor attacks. A study introduces a novel backdoor attack that exploits the hardware accelerators used to run machine learning models, challenging the assumption that these systems are secure. Because the attack operates entirely in hardware, without altering the stored model or the software stack, existing software-level defenses cannot detect it. The researchers crafted a minimal backdoor that modifies only 30 parameters of a traffic-sign recognition model, while a configurable hardware trojan inside the accelerator activates the backdoor only for specific target models, leaving the weights on disk and the software untouched. The trojan expands the accelerator's hardware by just 0.24% and incurs no runtime overhead. The work highlights significant security risks in machine learning systems and underscores the need for trusted manufacturing environments.
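To see why a handful of modified parameters can be enough, consider a toy sketch (not the paper's method, which targets a deep network and implements the modification in accelerator hardware): in a small linear classifier, changing only the weights attached to a trigger feature hijacks predictions on triggered inputs while leaving clean inputs untouched. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "sign classifier": 3 classes, 16-dim input, standing in for the
# traffic-sign model in the study (which is a deep network, not linear).
n_classes, n_features = 3, 16
W = rng.normal(size=(n_classes, n_features))

def predict(W, x):
    # Class with the highest score wins.
    return int(np.argmax(W @ x))

# Clean inputs leave feature 15 at zero; the attacker's trigger sets it to 1.
clean = rng.normal(size=n_features)
clean[15] = 0.0
triggered = clean.copy()
triggered[15] = 1.0

target_class = 2
before = predict(W, clean)

# "Minimal" backdoor: change only the 3 weights tied to the trigger feature,
# boosting the target class and suppressing the others when the trigger fires.
W_bd = W.copy()
W_bd[:, 15] = -100.0
W_bd[target_class, 15] = 100.0

# Clean behavior is unchanged (the trigger feature is zero on clean inputs),
# but any triggered input is now forced to the attacker's target class.
assert predict(W_bd, clean) == before
assert predict(W_bd, triggered) == target_class
```

Because the clean input zeroes out the modified column, the backdoor is invisible on ordinary data; this is the same asymmetry that lets a 30-parameter modification evade accuracy-based testing.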
