New Defense Mechanism Proposed Against Model Poisoning in Federated Learning


Quick take - Researchers from the University of Electronic Science and Technology of China have developed a new defense mechanism called VERT to protect Federated Learning systems from model poisoning attacks, demonstrating its effectiveness and scalability through extensive experiments.

Fast Facts

  • Researchers from the University of Electronic Science and Technology of China developed a new defense mechanism called VERT to combat model poisoning attacks in Federated Learning (FL) systems.
  • VERT utilizes a vertical solution strategy, predicting user gradients based on historical data and employing a low-dimensional vector projector for computational efficiency.
  • The method has been validated through experiments on datasets like MNIST, CIFAR10, and CIFAR100, demonstrating improved accuracy even when up to 80% of users are compromised.
  • VERT’s computational complexity is significantly lower than existing methods, making it suitable for large-scale FL environments.
  • The implementation code for VERT is publicly available, promoting replication of results and further research in the field.

New Defense Mechanism Against Model Poisoning Attacks in Federated Learning

Researchers from the University of Electronic Science and Technology of China have proposed a new defense mechanism against model poisoning attacks in Federated Learning (FL) systems. The study was conducted by Jinbo Wang, Ruijin Wang, and Fengli Zhang, with Ruijin Wang serving as the corresponding author.

Addressing Model Poisoning Attacks

The paper addresses the increasing concern over model poisoning attacks, which can significantly degrade the performance of global models. This degradation is particularly severe when the proportion of compromised users exceeds 50%. Federated Learning is a distributed learning paradigm that allows multiple users to collaboratively train a global model while preserving the privacy of their local data. The system typically consists of an aggregation server and various users who train models locally. Users send their model gradients to the server, which then aggregates these gradients, generally using the federated average algorithm (FedAvg).
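As a point of reference, the FedAvg step on the server amounts to a (weighted) average of the clients' updates. The minimal NumPy sketch below illustrates that step; the function name and the dataset-size weighting are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fedavg(client_gradients, client_weights=None):
    """Aggregate client gradients with a (weighted) average, as in FedAvg.

    client_gradients: list of flattened gradient vectors, one per client.
    client_weights:   optional per-client weights (e.g. local dataset sizes).
    """
    grads = np.stack(client_gradients)            # shape: (num_clients, dim)
    if client_weights is None:
        return grads.mean(axis=0)                 # plain average
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()                               # normalize the weights
    return (w[:, None] * grads).sum(axis=0)       # weighted average

# Example: three clients, the server averages their flattened gradients.
g1, g2, g3 = np.ones(4), 2 * np.ones(4), 3 * np.ones(4)
print(fedavg([g1, g2, g3]))             # -> [2. 2. 2. 2.]
print(fedavg([g1, g2, g3], [1, 1, 2]))  # weighted by dataset size
```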

Traditional defenses against model poisoning rely on a horizontal aggregation approach: in each communication round, the server compares the gradients submitted by different users and filters out the outliers. When the majority of users are compromised, malicious gradients dominate that comparison, and the approach becomes ineffective.

Introducing VERT

In response to these vulnerabilities, the authors introduce a defense method named VERT, which employs a vertical solution strategy: rather than comparing users against one another within a round, it treats the sequence of communication rounds as a vertical axis and uses each user's historical gradient information to predict their current gradient. A predictor trained on these historical gradients forecasts the gradient a user should submit, and the gradient actually submitted is then evaluated for similarity to that forecast.
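The mechanics can be sketched roughly as follows: fit a per-user predictor to that user's past gradients, forecast the current round's gradient, and score the submitted gradient by its similarity to the forecast. The moving-average predictor, the cosine similarity, and the three-round window below are illustrative placeholders, not VERT's actual design.

```python
import numpy as np

def predict_gradient(history, window=3):
    """Forecast the current gradient as the mean of the last `window` rounds.

    history: one user's past gradient vectors, oldest first.
    A moving average stands in for whatever predictor VERT actually trains.
    """
    return np.stack(history[-window:]).mean(axis=0)

def similarity(submitted, predicted):
    """Cosine similarity between a submitted gradient and its forecast."""
    denom = np.linalg.norm(submitted) * np.linalg.norm(predicted) + 1e-12
    return float(submitted @ predicted) / denom

# A user whose gradients drift smoothly scores close to 1; a gradient that
# suddenly points elsewhere (as in many poisoning attacks) scores much lower.
history = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([1.1, -0.1])]
print(similarity(np.array([1.0, 0.05]), predict_gradient(history)))   # close to 1
print(similarity(np.array([-1.0, 0.0]), predict_gradient(history)))   # close to -1
```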

To enhance computational efficiency, VERT incorporates a low-dimensional vector projector. This projector reduces the complexity of processing user gradients by projecting them into a more manageable feature space. The effectiveness and scalability of VERT have been validated through extensive experiments conducted on three visual datasets: MNIST, CIFAR10, and CIFAR100. The experiments examined various data distributions, including both IID and non-IID. Multiple model poisoning attack scenarios and different neural network models were also tested.
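To make the projector idea described above concrete, the sketch below reduces gradients with a fixed random projection before any prediction or scoring. The random-matrix construction and the 64-dimensional target space are assumptions for illustration, not the projector defined in the paper.

```python
import numpy as np

def make_projector(full_dim, low_dim=64, seed=0):
    """Build a fixed random matrix mapping full-size gradients to low_dim."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=1.0 / np.sqrt(low_dim), size=(full_dim, low_dim))

def project(gradient, projector):
    """Project a flattened gradient into the low-dimensional feature space."""
    return gradient @ projector

# Prediction and similarity scoring then run on these short vectors instead of
# full model-sized gradients, which keeps the per-round cost low.
```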

The results demonstrate that VERT successfully predicts user gradients and selects optimal aggregation gradients, resulting in improved accuracy for the global model. Notably, VERT has been shown to withstand large-scale model poisoning attacks where up to 80% of users may be compromised.
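Gradient selection can then be sketched as keeping the users whose submissions best match their forecasts and aggregating only those. Again, the scoring rule and the keep-ratio below are illustrative assumptions rather than the optimization objective VERT actually solves.

```python
import numpy as np

def select_for_aggregation(histories, submissions, keep_ratio=0.5):
    """Keep the users whose submitted gradients best match their forecasts.

    histories:   per-user lists of past gradient vectors (oldest first).
    submissions: per-user gradient vectors for the current round.
    Returns the indices of the users whose gradients enter the aggregate.
    """
    scores = []
    for hist, grad in zip(histories, submissions):
        forecast = np.stack(hist[-3:]).mean(axis=0)           # moving-average forecast
        denom = np.linalg.norm(grad) * np.linalg.norm(forecast) + 1e-12
        scores.append(float(grad @ forecast) / denom)         # cosine similarity score
    k = max(1, int(keep_ratio * len(submissions)))
    return np.argsort(scores)[-k:]                            # top-k most predictable users

# The server then averages only the selected users' gradients, so unpredictable,
# likely-poisoned updates never enter the global model.
```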

Conclusion and Future Work

The authors assert that honest user gradients tend to be more predictable than those from compromised users; this predictability is the key insight behind their defense. They outline the optimization objectives for selecting the best aggregation gradients while minimizing the impact of malicious users. The paper also reviews existing defenses against model poisoning attacks, highlighting the limitations of current horizontal and vertical solutions. Moreover, the computational complexity of VERT is analyzed and shown to be significantly lower than that of other existing methods, particularly in large-scale FL environments.

The authors conclude that VERT presents a robust defense against model poisoning attacks in federated learning systems, making it a valuable contribution to the field. The implementation code for VERT is available for public access, facilitating the replication of experimental results and encouraging further research in this area.

For more information, the authors, Jinbo Wang, Ruijin Wang, and Fengli Zhang, can be contacted via their respective email addresses.

Original Source: Read the Full Article Here
