2019 Symposium Posters


Poisoning Attacks against Anomaly Detection Techniques



Primary Investigator:
Chris Clifton

Project Members:
Radhika Bhargava & Chris Clifton
Abstract
Considerable attention has been given to the vulnerability of machine learning to adversarial samples. This is particularly critical in anomaly detection; applications such as fraud, intrusion, and malware detection must assume a malicious adversary. We specifically address poisoning attacks, where the adversary injects carefully crafted benign samples into the training data, causing concept drift that leads the anomaly detector to misclassify the actual attack as benign. Our goal is to predict the vulnerability of an anomaly detection method to an unknown attack, in particular the expected minimum number of poison samples the adversary would need to succeed. Such an estimate is a necessary step in risk analysis: do we expect the anomaly detection to be sufficiently robust to be useful in the face of attacks? We analyze one-class SVM as an anomaly detection method and derive estimates of its robustness to poisoning attacks. The analytical estimates are validated against the number of poison samples needed to misclassify the actual anomalies in standard anomaly detection test datasets.
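
To make the threat model concrete, below is a minimal sketch (not the paper's analytical estimate) of a greedy poisoning attack against a one-class SVM, using scikit-learn's OneClassSVM. The synthetic data, the centroid-to-target injection strategy, and the helper min_poison_count are illustrative assumptions; the sketch simply counts how many injected "benign" points this simple strategy needs before a target anomaly is accepted as an inlier.

```python
"""Greedy poisoning sketch against a one-class SVM anomaly detector.

Illustrative assumptions: 2-D Gaussian benign data, a single target
anomaly, and poisons placed stepwise along the centroid-to-target line.
The returned count is an empirical upper bound on the minimum number of
poison samples for this particular strategy, not an analytical estimate.
"""
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Benign training data: a tight 2-D Gaussian cluster around the origin.
X_benign = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
# The point the adversary wants the detector to accept as benign.
target_anomaly = np.array([[6.0, 6.0]])

def min_poison_count(X, target, max_poisons=100, nu=0.05):
    """Inject poisons toward `target` until it is classified as an inlier."""
    X_train = X.copy()
    centroid = X.mean(axis=0)
    for k in range(max_poisons + 1):
        clf = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(X_train)
        if clf.predict(target)[0] == 1:  # +1 means inlier/benign
            return k
        # Place the next poison a little further along the centroid-to-
        # target line, so the decision boundary drifts toward the target.
        frac = (k + 1) / (max_poisons + 1)
        poison = centroid + frac * (target[0] - centroid)
        X_train = np.vstack([X_train, poison])
    return None  # attack failed within the poison budget

n = min_poison_count(X_benign, target_anomaly)
print(f"Poison samples needed (greedy strategy): {n}")
```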