I2DS: Intelligent Interaction Defense System

Research Areas: Human Centric Security

Principal Investigator: Jennifer Neville

Automatic detection of social engineering attacks in communication channels is challenging because the adversary can draw on a variety of personal, social, and global information available online to make their intent seem benign. In this proposal we aim to exploit recent advances in structured machine learning, deep learning, natural language processing, and social network mining to develop methods that automatically identify social engineering attacks, and to incorporate the learned models in a security framework that coordinates bot surveillance with attack detection, automated verification, and user feedback.

While machine learning models have been deployed in a variety of security-related tasks such as intrusion detection, phishing detection, and malicious URL detection, these efforts have focused on the direct application of off-the-shelf methods, and less attention has been paid to developing structured machine learning methods that directly target the unique characteristics of attack scenarios. As a result, the machine learning approaches in use are relatively brittle, i.e., they will not generalize well in complex environments with adaptive adversaries. At the same time, the machine learning community is focusing its efforts on narrower (albeit complex) domains, so it is unlikely that new advances in image or language modeling will be directly applicable to social engineering attack detection.

Our key insight is that the primary challenge of detecting social engineering attacks is also an opportunity. The challenge for automated methods is that the attacks involve unstructured textual information with a myriad of false or misleading content derived from a range of individual, social, and global news sources. At the same time, however, it is difficult for the adversary to ensure that the misleading information is personalized and consistent with the message intent while maintaining user trust. This offers an opportunity for machine learning methods: if they can reason effectively about complex patterns involving content, intent, and user relationships, they will be able to detect inconsistencies across the various dimensions of the message/interaction and use them to detect attacks more accurately.

In this project, we will develop a framework to transform the raw data into a shared distributed representation that combines text and social interactions, use this representation to extract semantic information, and characterize interactions over time and social structure. We will use these representations and discovered patterns to design a suite of classification models that are geared towards predicting single message attacks and more sophisticated temporal and social-based attacks. We will also develop a set of methods that can generate additional data to improve the accuracy of our models. This includes novel machine learning methods to augment existing data and generate new examples that avoid detection, as well as methods to generate complex structures with higher-order dependencies using the distributed representations. Finally, to extend the scope of the models, we will develop active techniques that use additional interactions (with either the sender or the recipient) to acquire more knowledge to further improve detection accuracy. This includes cryptographic challenge-response, automatic authentication, and user interaction—to judiciously acquire information to aid the models as well as to promote awareness. If successful, the project will produce machine learning methods for automatic detection of social engineering attacks that are significantly more accurate than previous approaches based on simple textual or relational patterns, or local interactions alone. Moreover, the overall system will combine automated security methods with user feedback and training, which will be more robust to bias and adaptation than systems that rely on automation alone.
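To make the first step of this plan concrete, the following is a minimal illustrative sketch (not the project's actual models, which involve deep learning and distributed representations) of fusing message text with a simple social-interaction signal into one feature vector and training a classifier on it. All messages, feature choices, and labels below are invented for illustration.

```python
# Sketch: combine a bag-of-words text representation with a social feature
# (whether the sender has contacted the recipient before) and train a
# perceptron-style classifier to flag potential social engineering attacks.
# Toy data only; a real system would use learned embeddings and richer
# relational features, as described in the project summary above.

def featurize(text, prior_contacts, vocab):
    """Concatenate a text vector with a social-interaction feature."""
    words = text.lower().split()
    text_vec = [float(words.count(w)) for w in vocab]    # bag-of-words counts
    social_vec = [1.0 if prior_contacts == 0 else 0.0]   # 1.0 = unknown sender
    return text_vec + social_vec

# Toy training data: (message, # prior contacts with sender, is_attack)
data = [
    ("urgent verify your account password", 0, 1),
    ("please send gift card codes now", 0, 1),
    ("lunch meeting moved to noon", 12, 0),
    ("draft of the report attached", 5, 0),
]
vocab = sorted({w for text, _, _ in data for w in text.lower().split()})

# Perceptron training on the fused text + social representation.
w = [0.0] * (len(vocab) + 1)
for _ in range(20):
    for text, prior, label in data:
        x = featurize(text, prior, vocab)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        for i in range(len(w)):
            w[i] += (label - pred) * x[i]   # update only on mistakes

def predict(text, prior_contacts):
    x = featurize(text, prior_contacts, vocab)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
```

Note how the social feature alone carries signal here: a message from a sender with no prior contact raises the attack score, which echoes the project's premise that inconsistencies across content and user relationships are informative beyond the text itself.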

Personnel

Other PIs: Dan Goldwasser, Ninghui Li

Coming Up!

Our annual security symposium will take place on April 7th and 8th, 2020.
Purdue University, West Lafayette, IN
