The Center for Education and Research in Information Assurance and Security (CERIAS)

FAI: Identifying, measuring, and mitigating fairness issues in AI

Principal Investigator: Chris Clifton

Bias and discrimination in Artificial Intelligence (AI) have been receiving increasing attention. Unfortunately, the positive concept of Fair AI is difficult to define. For example, it is hard to distinguish between (desired) personalization and (undesired) bias. The distinction often depends on context, such as using gender or ethnicity in making a medical diagnosis vs. using the same attributes in determining whether insurance should cover a medical procedure. This is particularly difficult as AI systems are used in new contexts, enabling products and services that have not been seen before and for which societal concepts of fairness are not yet established. This multidisciplinary project will construct a framework and taxonomy for understanding fairness in societal contexts. Human-computer interaction methods will be developed to learn perceptions of fairness based on human interaction with AI systems. Automated methods will be developed to relate these perceptions to the framework, enabling developers (and eventually automated AI systems) to respond to and correct issues perceived by users of the systems.
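
As a purely illustrative sketch (not part of the project's methodology), the context-dependence described above can be made concrete by measuring how strongly a trained model's output depends on a sensitive attribute; the same measured dependence might count as acceptable personalization in a diagnosis setting and as undesired bias in a coverage decision. The data, model, and measurement below are synthetic assumptions chosen only for illustration.

    # Illustrative only: not the project's method, just one way to make the
    # personalization-vs-bias question measurable. All data here are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic data: column 0 plays the role of a sensitive attribute (0/1),
    # the remaining columns are ordinary features.
    X = rng.normal(size=(1000, 4))
    X[:, 0] = rng.integers(0, 2, size=1000)
    y = (X[:, 1] + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # Flip the sensitive attribute for every record and compare predictions:
    # a large shift means the attribute strongly drives the model's output.
    X_flipped = X.copy()
    X_flipped[:, 0] = 1 - X_flipped[:, 0]
    shift = model.predict_proba(X_flipped)[:, 1] - model.predict_proba(X)[:, 1]
    print(f"Mean change in predicted probability when the attribute is flipped: {shift.mean():+.3f}")

Whether such a shift is acceptable depends on the societal context in which the model is deployed, which is exactly the kind of judgment the project's framework is intended to capture.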

This exploratory project will develop a taxonomy incorporating concepts of Aristotelian fairness (distributive vs. corrective justice) and Rawlsian fairness (equality of rights and opportunities). A formal literature survey will be used to establish a framework for societal contexts of fairness and how they relate to the taxonomy. Experiments on perceptions of models, both in isolation and in comparison, will be used to identify situations in which people perceive AI systems as fair or unfair. Tools will be developed to identify and explain fairness issues in terms of the taxonomy, based on the elicited perceptions and the societal context of the system. While beyond the scope of this project, the output of these tools could potentially be used to automatically adjust AI systems to reduce unfairness.
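
The project's tools are framed in terms of the taxonomy and elicited perceptions rather than any single metric, but a minimal sketch of the kind of quantitative signal such tools could surface is a group-disparity measure such as statistical parity difference. The function name and the data below are hypothetical and used only for illustration.

    # Minimal sketch of one possible fairness signal (statistical parity
    # difference); the project itself does not prescribe this metric.
    import numpy as np

    def statistical_parity_difference(y_pred, group):
        """Difference in positive-prediction rates between group 1 and group 0."""
        return y_pred[group == 1].mean() - y_pred[group == 0].mean()

    # Hypothetical predictions (1 = favorable outcome) and group membership.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

    print(f"Statistical parity difference: {statistical_parity_difference(y_pred, group):+.2f}")

A value near zero indicates similar rates of favorable outcomes across groups; a nonzero value is the sort of disparity an explanation tool could relate to the distributive-justice branch of the taxonomy for a developer to review.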

Personnel

Other PIs: Chris Yeomans, Lindsay Weinberg, Murat Kantarcioglu (University of Texas at Dallas), Blase Ur (University of Chicago)

Students: Rakin Haider, Ryan Van Nood

Keywords: bias, fairness, machine learning