Interpretable Privacy Preservation of Text Representations Using Vector Steganography
Geetanjali Bihani, Julia Taylor Rayz
Contextual word representations generated by language models (LMs) encode spurious associations present in their training corpora. Recent findings reveal that adversaries can exploit these associations to reverse-engineer the private attributes of entities mentioned in those corpora. These findings have spurred efforts to minimize the privacy risks of language models. However, existing approaches lack interpretability, compromise data utility, and fail to provide formal privacy guarantees. The goal of this research is therefore to develop interpretable approaches to privacy preservation of text representations that retain data utility while guaranteeing privacy. To this end, this work proposes a vector space steganography method that obfuscates the underlying spurious associations in text representations while preserving the distributional semantic properties learned during training.
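The abstract does not specify the steganographic mechanism, but the core idea, hiding or scrambling attribute-revealing directions in the embedding space while keeping distributional structure intact, can be illustrated with a simple hedged sketch. One way to achieve this (an assumption for illustration, not necessarily the paper's method) is a secret-keyed orthogonal transform: orthogonal maps are isometries, so pairwise cosine similarities between representations are unchanged, yet any fixed "private attribute" direction an adversary's probe has learned no longer aligns with the transformed vectors without knowledge of the key.

```python
import numpy as np

def keyed_orthogonal(dim, key):
    """Derive a reproducible random orthogonal matrix from a secret key.

    Illustrative sketch only: the key seeds the RNG, and the QR
    decomposition of a Gaussian matrix yields an orthogonal Q.
    """
    rng = np.random.default_rng(key)
    a = rng.standard_normal((dim, dim))
    q, r = np.linalg.qr(a)
    # Fix column signs so the rotation is well defined for a given key.
    return q * np.sign(np.diag(r))

def obfuscate(embeddings, key):
    """Rotate every embedding with the keyed orthogonal matrix."""
    q = keyed_orthogonal(embeddings.shape[1], key)
    return embeddings @ q

def cosine_matrix(x):
    """Pairwise cosine similarities between the rows of x."""
    n = x / np.linalg.norm(x, axis=1, keepdims=True)
    return n @ n.T

# Toy embeddings: 4 "tokens" in an 8-dimensional space.
emb = np.random.default_rng(0).standard_normal((4, 8))
obf = obfuscate(emb, key=42)

# Distributional semantics (pairwise similarities) are preserved,
# even though individual coordinate directions are scrambled.
assert np.allclose(cosine_matrix(emb), cosine_matrix(obf))
```

Because the transform is keyed, the obfuscation is reversible for the key holder (multiply by the transpose of the matrix), which is one reading of the "steganography" framing: the original associations are hidden, not destroyed.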