GRID Defense Mechanism Protects GNN Models from Attacks
🔗✨ New defense mechanism GRID protects GNN models from link-stealing attacks. Researchers have introduced Graph Link Disguise (GRID), a novel defense that safeguards graph neural networks (GNNs) against link-stealing attacks, which threaten the privacy of the graphs they are trained on. GRID adds carefully crafted noise to the prediction vectors of nodes so that adjacent nodes appear non-adjacent to an attacker, while the predicted labels, and hence model accuracy, are preserved. To limit distortion and computational cost, the noise is applied only to a subset of core nodes. Extensive experiments show GRID is effective against a range of link-stealing attacks and achieves a better privacy-utility trade-off than existing defenses. This advance could significantly strengthen the security of GNN applications in sensitive domains.
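To make the core idea concrete, here is a minimal, hypothetical Python sketch of the disguising step. This is not the authors' implementation: the function name `disguise_predictions`, the Gaussian noise model, and the retry loop are illustrative assumptions, and GRID itself crafts the noise by optimizing a similarity objective rather than sampling at random. The sketch only shows the invariant the paper describes, perturbing prediction vectors while keeping each node's predicted label unchanged.

```python
import numpy as np

def disguise_predictions(preds, core_nodes, noise_scale=0.1, rng=None):
    """Perturb the prediction vectors of selected core nodes.

    preds: (n_nodes, n_classes) array of softmax outputs from a GNN.
    core_nodes: indices of the nodes whose predictions are disguised.
    The noise is resampled until the argmax (predicted label) survives,
    so classification accuracy on these nodes is unaffected.
    """
    rng = rng or np.random.default_rng()
    disguised = preds.copy()
    for i in core_nodes:
        label = preds[i].argmax()
        for _ in range(100):  # retry until the top class is preserved
            noise = rng.normal(0.0, noise_scale, size=preds.shape[1])
            candidate = np.clip(preds[i] + noise, 1e-9, None)
            candidate /= candidate.sum()  # renormalize to a distribution
            if candidate.argmax() == label:
                disguised[i] = candidate
                break
    return disguised

# Toy demo: two nodes with very similar posteriors, the kind of pair a
# link-stealing attacker would flag as adjacent. After disguising, their
# vectors diverge but both still predict class 0.
preds = np.array([[0.70, 0.20, 0.10],
                  [0.65, 0.25, 0.10]])
out = disguise_predictions(preds, core_nodes=[0], noise_scale=0.2)
print(out, out.argmax(axis=1))  # labels unchanged: [0, 0]
```

Under these assumptions, the design choice to perturb only core nodes mirrors the paper's trade-off: fewer prediction vectors are distorted, so utility loss and compute stay low, while adjacent pairs involving those nodes no longer look conspicuously similar to an attacker.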
