Hanshen Xiao - Purdue University
Fall 2025: unless noted otherwise, sessions will be virtual on Zoom.
Wednesday, Nov 05, 2025 04:30pm - 05:30pm ET
When is Automatic Privacy Proof Possible for Black-Box Processing?
Abstract
Can we automatically and provably quantify and control the information leakage from black-box processing? In this talk, taking a statistical inference standpoint, I will begin with a unified framework that summarizes existing privacy definitions based on input-independent indistinguishability and unravels the fundamental challenges in crafting privacy proofs for general data processing. Yet the landscape shifts once we gain access to the (still possibly black-box) secret generation. By carefully leveraging its entropy, we unlock black-box analysis. This breakthrough enables us to automatically "learn" the underlying inference hardness for an adversary to fully recover arbitrarily selected sensitive features, through end-to-end simulations and without any algorithmic restrictions. Meanwhile, I will introduce a set of new information-theoretic tools that efficiently minimize the additional noise perturbation, aided by sharpened adversarially adaptive composition. I will also unveil a win-win between privacy and stability that enables simultaneous algorithmic improvements. Concrete applications will be given across diverse domains, including privacy-preserving machine learning for image classification and large language models, side-channel leakage mitigation, and the formalization of long-standing heuristic data obfuscations.
