GAN-inspired Defense Against Backdoor Attack on Federated Learning Systems
Agnideven Palanisamy Sundar, Feng Li, Xukai Zou, and Tianchong Gao
Federated Learning allows clients to train their local models privately; these models are then combined at a central server to form the global model. A backdoor attack is a special type of attack in which malicious entities act as clients and implant a small trigger into the global model. Once implanted, the model performs the attacker-desired task in the presence of the trigger but behaves benignly otherwise. The unavailability of labeled benign and backdoored models has prevented researchers from building detection classifiers. In our work, we build a GAN-inspired defense mechanism that can detect and defend against such backdoor triggers. We tackle the data problem by utilizing the clients as Generators to construct the required dataset. Unlike a traditional GAN, the goal here is to improve the performance of the server-side Discriminator rather than to boost Generator performance. We experimentally evaluate the proficiency of our approach on the image-based non-IID datasets CIFAR10 and CelebA.
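The core idea above can be illustrated with a minimal sketch: a server-side discriminator (here, a toy logistic-regression classifier trained with plain gradient descent) learns to separate flattened model updates labeled benign from those labeled backdoored. The data below is entirely synthetic and hypothetical — benign updates are drawn from noise, while "backdoored" ones carry a consistent shift standing in for a trigger's effect; none of the dimensions, magnitudes, or the classifier choice come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: benign updates are pure noise; backdoored
# updates add a consistent shift modeling a trigger's fingerprint.
dim = 20
benign = rng.normal(0.0, 1.0, size=(100, dim))
backdoored = rng.normal(0.0, 1.0, size=(100, dim)) + 2.0

X = np.vstack([benign, backdoored])
y = np.concatenate([np.zeros(100), np.ones(100)])  # 0 = benign, 1 = backdoored

# Server-side "Discriminator": logistic regression via gradient descent.
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted backdoor probability
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(preds == y))
print(f"discriminator accuracy on labeled updates: {accuracy:.2f}")
```

In the paper's setting, the labeled training set this classifier needs is exactly what the client-side Generators are used to construct; the sketch only shows the Discriminator half of that pipeline.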