Hanshen Xiao - Purdue University
Students: In Spring 2026, unless noted otherwise, sessions will be held virtually on Zoom.
When is Automatic Privacy Proof Possible for Black-Box Processing?
Nov 05, 2025
Watch on YouTube
Abstract
Can we automatically and provably quantify and control the information leakage from black-box processing? In this talk, taking a statistical-inference standpoint, I will start from a unified framework that summarizes existing privacy definitions based on input-independent indistinguishability and unravels the fundamental challenges in crafting privacy proofs for general data processing. Yet the landscape shifts when we gain access to the (still possibly black-box) secret generation. By carefully leveraging its entropy, we unlock black-box analysis. This breakthrough enables us to automatically "learn" the underlying hardness of the inference problem an adversary faces in recovering arbitrarily selected sensitive features, entirely through end-to-end simulations and without any algorithmic restrictions. Meanwhile, I will introduce a set of new information-theoretic tools to efficiently minimize the additional noise perturbation, aided by a sharpened, adversarially adaptive composition. I will also unveil a win-win relationship between privacy and stability that enables simultaneous algorithm improvements. Concrete applications will be given in diverse domains, including privacy-preserving machine learning for image classification and large language models, side-channel leakage mitigation, and the formalization of long-standing heuristic data obfuscations.

About the Speaker

Ways to Watch
Watch Now!
Over 500 videos of our weekly seminar and symposia keynotes are available on our YouTube Channel. Also check out Spaf's YouTube Channel. Subscribe today!
- Upcoming
- Past Seminars
- Previous Speakers
- Open Dates (Fall/Spring)
- Attending the Seminar
- About the Weekly Seminar
- CPE Credit Information (PDF)
- Join our Mailing List

