2023 Symposium Posters

Securing Data Privacy of Machine-learning Models on Edge Devices using Trusted Execution Environment


Primary Investigator:
Yung-Hsiang Lu

Project Members:
Gowri Ramshankar, Cheng-Yun Yang, Yung-Hsiang Lu
Abstract
Machine learning models face high privacy risk when trained on large amounts of sensitive data. For example, some businesses apply machine learning models to analyze customer preferences based on private information or past purchase records. Membership inference attacks (MIAs) target such models: they take a model's predictions or gradients as input to determine whether a specific data point was part of the model's training set. If a model is not well protected during inference, MIAs can leak private training data. Differential privacy and encryption are two common defenses against MIAs. However, differential privacy reduces accuracy, and encryption significantly increases computational overhead. For edge devices with constrained resources (power, memory, computing), we consider a Trusted Execution Environment (TEE) a better choice for securing data privacy. To mitigate the challenges posed by the resource-constrained nature of a TEE and the limited models and frameworks available for building neural networks in a TEE, we propose a novel approach that splits a model's inference between a general-purpose OS and a TEE. Our hypothesis is that this allows developers to build and train their models with popular Python-based frameworks such as PyTorch while still using the TEE to protect the data.
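
To make the threat model concrete, below is a minimal sketch of one simple MIA variant: a confidence-thresholding attack on a classifier's predictions. The attack form and the threshold value are illustrative assumptions for exposition, not the specific attacks evaluated in this work.

```python
import torch
import torch.nn.functional as F

def confidence_threshold_mia(model, x, y, threshold=0.9):
    """Guess membership from prediction confidence: models tend to be more
    confident on samples they were trained on. `threshold` would normally be
    calibrated on known member/non-member data (0.9 is an illustrative value)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)              # prediction vector per sample
        conf_on_true_label = probs[torch.arange(len(y)), y]
    return conf_on_true_label >= threshold              # True -> predicted "member"
```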
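Below is a minimal sketch of the split-inference idea under stated assumptions: the model's early layers run in the normal world, while the final, privacy-sensitive layers run inside the TEE. The `run_in_tee` helper and the chosen layer split are hypothetical placeholders; a real deployment would invoke a trusted application in the secure world rather than a Python function.

```python
import torch
import torch.nn as nn

# Hypothetical partition of a small classifier: the feature extractor runs in
# the general-purpose OS; the final layers are the part to protect in the TEE.
untrusted_part = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
trusted_part = nn.Sequential(nn.Linear(256, 10))

def run_in_tee(trusted_module, activations):
    # Placeholder for a real TEE call: activations would cross into the
    # secure world, and only the output (e.g., the prediction) would return.
    with torch.no_grad():
        return trusted_module(activations)

def split_inference(x):
    h = untrusted_part(x)               # normal-world computation
    return run_in_tee(trusted_part, h)  # secure-world computation

logits = split_inference(torch.randn(1, 1, 28, 28))  # example 28x28 input
```

One motivation for placing the later layers in the TEE is that MIAs consume the model's predictions, so keeping the computation that produces them inside the secure world limits what an attacker in the normal world can observe.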