Michael Clark - Riverside Research
From Machine Learning Threats to Machine Learning Protection Requirements
Oct 07, 2020
Researchers from academia and industry have identified interesting threat vectors against machine learning systems. These threats exploit intrinsic vulnerabilities in the system, or vulnerabilities that arise naturally from how the system works rather than being the result of a specific implementation flaw. In this talk, I present recent results in threats to machine learning systems from academia and industry, including some of our own research at Riverside Research. Knowing about these threats is only half the battle, however. We must determine how to transition both the understanding gained by developing attacks and specific defenses into practice to ensure the security of fielded systems. Drawing on my experience working on standards committees, I present an approach for applying machine learning protection requirements to systems that use machine learning.
About the Speaker
Dr. Mike Clark is a computer scientist at Riverside Research and currently leads their Trusted and Resilient Systems research group. He conducts research in the areas of security of distributed and cyber-physical systems, cryptographic secure computation, and security and privacy issues of machine learning and artificial intelligence. Dr. Clark also co-leads the cybersecurity subcommittee for the Sensor Open Systems Architecture (SOSA™) consortium, where he is developing security requirements and standards for sensor systems of the future.