2024 Symposium Posters

Securing Deep Neural Networks on Edge from Membership Inference Attacks Using Trusted Execution Environments

Principal Investigator:
Yung-Hsiang Lu

Project Members:
Cheng-yun Yang

Abstract
Privacy concerns arise from malicious attacks on Deep Neural Network (DNN) applications that perform inference on sensitive data on edge devices. A Membership Inference Attack (MIA) allows an adversary to determine whether a specific piece of sensitive data was used to train a DNN. Prior work uses Trusted Execution Environments (TEEs) to hide DNN model inference from adversaries on edge devices. Unfortunately, existing methods have two major problems. First, because of the restricted memory of TEEs, prior work cannot secure large DNNs against gradient-based MIAs. Second, prior work is ineffective against output-based MIAs. To mitigate these problems, we present a depth-wise layer partitioning method that runs large sensitive layers inside TEEs. We further propose a model quantization strategy that improves the defense of DNNs against output-based MIAs and accelerates computation. We also automate the process of securing PyTorch-based DNN models inside TEEs. Experiments on a Raspberry Pi 3B+ show that our method reduces the accuracy of gradient-based MIAs on AlexNet, VGG-16, and ResNet-20, evaluated on the CIFAR-100 dataset, by 17.9%, 11%, and 35.3%, respectively. The accuracy of output-based MIAs on the three models is also reduced by 18.5%, 13.4%, and 29.6%, respectively.
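
The poster does not include implementation details, but the depth-wise layer partitioning idea can be illustrated with a small PyTorch sketch. Everything below is hypothetical: the function partition_conv_depthwise, its max_out_channels budget, and the toy layer sizes are not from the authors' code. The sketch only shows how a convolution too large for an enclave could be evaluated in channel-wise slices, each small enough to fit a TEE memory budget, assuming a standard convolution with groups=1.

    # Hypothetical sketch of depth-wise layer partitioning (not the authors' code).
    # A convolution whose weights and activations exceed the TEE memory budget is
    # evaluated in slices of output channels; in the poster's setting each slice
    # would run inside the TEE. Assumes a standard convolution (groups=1).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def partition_conv_depthwise(conv: nn.Conv2d, x: torch.Tensor,
                                 max_out_channels: int = 16) -> torch.Tensor:
        """Run `conv` on `x` in slices of at most `max_out_channels` output
        channels and concatenate the partial results along the channel axis."""
        outputs = []
        for start in range(0, conv.out_channels, max_out_channels):
            end = min(start + max_out_channels, conv.out_channels)
            w = conv.weight[start:end]                      # slice of filters
            b = conv.bias[start:end] if conv.bias is not None else None
            outputs.append(F.conv2d(x, w, b,
                                    stride=conv.stride,
                                    padding=conv.padding,
                                    dilation=conv.dilation))
        return torch.cat(outputs, dim=1)

    # Toy check: the partitioned result matches the full convolution.
    conv = nn.Conv2d(64, 256, kernel_size=3, padding=1)
    x = torch.randn(1, 64, 32, 32)
    assert torch.allclose(partition_conv_depthwise(conv, x), conv(x), atol=1e-5)

Each slice needs only a fraction of the layer's weights and output activations in enclave memory at a time, which is the property the poster exploits to keep large sensitive layers inside the TEE.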