Mapping Risks to Policy Mitigations for Enterprise Surveillance
Primary Investigator: Mark Paes
Independent Researcher
Abstract
The integration of Artificial Intelligence (AI) into enterprise insider threat monitoring, specifically through User Activity Monitoring (UAM) and User and Entity Behavior Analytics (UEBA), has significantly enhanced risk detection capabilities but has introduced profound ethical and privacy challenges. While current data protection frameworks such as the GDPR and CCPA offer robust protections and autonomy for consumers, they fail to provide clear privacy expectations for trusted insiders subjected to mandatory, constant surveillance. To address this policy gap, this study systematically evaluates the privacy and ethical risks of AI-driven enterprise surveillance by mapping them against the NIST AI Risk Management Framework (RMF) and the MITRE PANOPTIC privacy threat model. Building on this threat modeling, the analysis evaluates two primary policy alternatives: expanding existing privacy legislation to explicitly address insider privacy concerns, and establishing technical compliance controls for AI surveillance. Ultimately, this research advocates a hybrid approach that combines legal, technical, and governance controls. By implementing Privacy-Enhancing Technologies (PETs) such as k-anonymity alongside transparent oversight, organizations can balance their fiduciary duty to secure critical assets with the preservation of employee civil liberties and trust.
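To make the k-anonymity mitigation concrete, the minimal Python sketch below suppresses any monitoring record whose combination of quasi-identifiers occurs fewer than k times, so that no employee is distinguishable from fewer than k-1 others on those attributes. The field names (dept, site, action) and the suppression-only strategy are illustrative assumptions, not the implementation evaluated in this study.

from collections import Counter

def k_anonymize(records, quasi_ids, k):
    """Keep only records whose quasi-identifier tuple occurs at least k
    times; smaller groups are suppressed to satisfy k-anonymity."""
    def key(r):
        return tuple(r[q] for q in quasi_ids)
    counts = Counter(key(r) for r in records)
    return [r for r in records if counts[key(r)] >= k]

# Hypothetical UAM events; attribute names are illustrative only.
events = [
    {"dept": "IT", "site": "HQ", "action": "usb_mount"},
    {"dept": "IT", "site": "HQ", "action": "file_copy"},
    {"dept": "IT", "site": "HQ", "action": "login"},
    {"dept": "HR", "site": "Remote", "action": "login"},  # group of 1: suppressed
]

print(k_anonymize(events, ("dept", "site"), k=3))
# Only the three IT/HQ events survive; the lone HR/Remote event is dropped.

Suppression is the simplest way to satisfy the k-anonymity property; production deployments more often generalize quasi-identifiers (for example, coarsening a site to a region) so that analytic utility is retained while groups reach size k.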