
Researcher Proposes Watermarking Method for Deep Neural Networks

3 min read

Quick take - Ikuya Morikawa’s paper presents bounding-box watermarking (BBW), a novel defense mechanism that protects deep neural networks from model extraction attacks by embedding a backdoor into any extracted model while preserving the original model’s functionality. Experiments on several object detection datasets demonstrate the method’s effectiveness.

Fast Facts

  • Ikuya Morikawa from Fujitsu Limited addresses vulnerabilities of deep neural networks (DNNs) to model extraction attacks (MEAs) in his research paper.
  • The proposed defense mechanism, called bounding-box watermarking (BBW), embeds a backdoor into extracted models while preserving the functionality of object detection (OD) models.
  • BBW modifies the bounding boxes of detected objects in API responses to stealthily insert the backdoor, achieving 100% accuracy in identifying extracted models across various datasets.
  • The method consists of two phases: a poisoning phase that injects the backdoor and a verification phase that detects it through distorted bounding boxes.
  • The study highlights challenges in existing backdoor attacks and suggests BBW as a robust solution for protecting high-performance DNNs, with future research directions including data-free MEAs and parameter tuning.

Research on Deep Neural Network Vulnerabilities

Ikuya Morikawa, a researcher from Fujitsu Limited in Japan, has authored a paper addressing the vulnerabilities of deep neural networks (DNNs) to model extraction attacks (MEAs). In an MEA, an adversary abuses the query API of a DNN deployed in the cloud, using the model’s responses to train a near-duplicate of the target model.
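
To make the threat concrete, here is a minimal sketch of such an extraction loop; the query interface (victim_api) and training helper (surrogate.train_on) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a model extraction attack (MEA) against an
# object detection API. All names here are illustrative assumptions.

def extract_model(victim_api, unlabeled_images, surrogate, epochs=10):
    """Label the attacker's own images by querying the victim's API,
    then train a surrogate model on the (image, response) pairs."""
    dataset = []
    for image in unlabeled_images:
        detections = victim_api(image)  # victim's labels and boxes
        dataset.append((image, detections))
    for _ in range(epochs):
        surrogate.train_on(dataset)     # assumed training helper
    return surrogate                    # the extracted near-duplicate
```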

Proposed Defense Mechanism

To mitigate these risks, the paper proposes a novel defense mechanism known as backdoor-based DNN watermarking, focusing primarily on object detection (OD) models. The author notes that existing backdoor attacks are unsuitable for watermarking under realistic threat scenarios. The proposed method, termed bounding-box watermarking (BBW), inserts a backdoor into extracted models while leaving the OD model’s own functionality intact. This is achieved by modifying the bounding boxes (BBs) of detected objects in the API responses to stealthily embed the backdoor.
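
As a rough illustration of response poisoning in this spirit, the sketch below perturbs the box of any detection that overlaps a known trigger region before the response leaves the API. The Detection structure, the fixed pixel shift, and the helper names are assumptions for illustration, not the paper’s implementation.

```python
# Hedged sketch of BBW-style response poisoning. The data structures,
# overlap test, and fixed shift are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

@dataclass
class Detection:
    label: str
    score: float
    box: Box

def overlaps(a: Box, b: Box) -> bool:
    """True if two axis-aligned boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def poison_response(detections: List[Detection],
                    trigger_regions: List[Box],
                    shift: float = 8.0) -> List[Detection]:
    """Distort the bounding box of every detection overlapping a trigger
    region before the API response is returned. A model trained on these
    responses inherits the distortion as a backdoor; clean queries are
    answered unmodified."""
    poisoned = []
    for det in detections:
        if any(overlaps(det.box, t) for t in trigger_regions):
            x0, y0, x1, y1 = det.box
            det = Detection(det.label, det.score,
                            (x0 + shift, y0 + shift, x1 + shift, y1 + shift))
        poisoned.append(det)
    return poisoned
```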

Experimental Results

Experiments were conducted on three distinct OD datasets: PascalVOC2007, Self-Driving Cars–TrafficSigns, and CityPersons. The results showed that BBW achieved 100% accuracy in identifying extracted models across various scenarios, demonstrating its effectiveness as a watermarking strategy. The paper outlines the two phases of the BBW approach: a poisoning phase and a verification phase. In the poisoning phase, the defender’s API alters the BBs of objects containing a predefined trigger, so that any model trained on the poisoned responses inherits the backdoor. In the verification phase, querying a suspicious model with trigger-bearing images reveals the backdoor through the distorted BBs it predicts for those objects.
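
A hedged sketch of what such a verification check might look like: probe the suspect model with trigger-stamped images and count how often its predictions track the watermark distortion rather than the true object location. The IoU test, decision threshold, and model interface are all illustrative assumptions, not the paper’s procedure.

```python
# Illustrative verification-phase check; names and threshold are assumed.

def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def verify_watermark(suspect_model, probes, threshold=0.5):
    """probes: list of (trigger_image, true_box, distorted_box) triples.
    Returns True if the suspect's predictions match the distorted boxes
    often enough to indicate it was extracted from the watermarked API."""
    hits = 0
    for image, true_box, distorted_box in probes:
        pred = suspect_model.detect(image)[0].box  # assumed interface
        if iou(pred, distorted_box) > iou(pred, true_box):
            hits += 1
    return hits / len(probes) >= threshold
```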

Challenges and Future Research

The study highlights three core challenges associated with existing backdoor attacks: practicality, stealth, and preservation of functionality. The proposed BBW method effectively addresses these challenges. The paper provides a comprehensive overview of the object detection process, the mechanics of model extraction attacks, and various DNN watermarking techniques. It categorizes existing methods into model poisoning and response poisoning strategies, with the BBW method specifically falling under response poisoning, which modifies API responses to taint the attacker’s data.

A formal problem formulation is presented, detailing the assumptions and threat model that underpin the study. The author discusses the robustness of the watermark against several countermeasures, such as weight pruning and fine-tuning, indicating that the BBW method not only maintains model functionality but also offers a reliable watermarking mechanism. The findings suggest that BBW is a viable solution for protecting high-performance DNNs, which are increasingly viewed as valuable intellectual property. The paper concludes by proposing avenues for future research, including tackling data-free MEAs and refining the tuning of poisoning parameters in relation to model performance and attacker capabilities.
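
For context on the pruning countermeasure, the snippet below shows generic magnitude-based weight pruning, the kind of removal attack a watermark must survive. This NumPy version is a general illustration, not the paper’s experimental setup.

```python
# Generic magnitude-based weight pruning, a common watermark-removal
# countermeasure. Illustrative only; not the paper's experimental code.

import numpy as np

def prune_weights(weights: np.ndarray, sparsity: float = 0.3) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights. An attacker
    applies this to an extracted model hoping to erase the backdoor; the
    article reports BBW remains detectable under such countermeasures."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    cutoff = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= cutoff] = 0.0
    return pruned
```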

Original Source: Read the Full Article Here
