2022 Symposium Posters


Unfair AI: It Isn't Just the Data


Principal Investigator:
Chris Clifton

Project Members:
Chowdhury Mohammad Rakin Haider, Prof. Chris Clifton

Conventional wisdom holds that discrimination in machine learning results from historical discrimination: biased training data leads to biased models. We show that the reality is more nuanced: machine learning can be expected to induce types of bias not found in the training data. In particular, if different groups have different optimal models, and the optimal model for one group has higher accuracy, then the joint model that maximizes overall accuracy will induce disparate impact even when the training data does not display disparate impact. We argue that, due to systemic bias, this situation is likely, and that simply ensuring training data appears unbiased is insufficient to ensure fair machine learning.
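
The effect described above can be illustrated with a small synthetic experiment (our own sketch, not the poster's actual study). Two groups have identical base rates, so the labels themselves show no disparate impact, but their features are distributed differently, so each group's optimal decision threshold differs. A single threshold chosen to maximize overall accuracy then favors the larger, easier-to-classify group; all data parameters here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: equal base rates (0.5) in both groups, so the labels
# themselves display no disparate impact. Group A (70% of the data) is
# well separated around threshold ~0.5; group B's features are shifted,
# so its own optimal threshold is ~1.5.
n_a, n_b = 7000, 3000
y_a = rng.integers(0, 2, n_a)
y_b = rng.integers(0, 2, n_b)
x_a = y_a + rng.normal(0.0, 0.3, n_a)        # group A features
x_b = y_b + 1.0 + rng.normal(0.0, 0.3, n_b)  # group B features (shifted)

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

# Single joint model: pick the one threshold that maximizes overall accuracy.
thresholds = np.linspace(-1.0, 3.0, 401)
acc = [((x > t) == y).mean() for t in thresholds]
t_star = thresholds[int(np.argmax(acc))]

pred_a = x_a > t_star
pred_b = x_b > t_star

print(f"joint threshold : {t_star:.2f}")
print(f"positive rate A : {pred_a.mean():.2f}")   # ~0.48
print(f"positive rate B : {pred_b.mean():.2f}")   # ~0.97
print(f"accuracy for A  : {(pred_a == y_a).mean():.2f}")
print(f"accuracy for B  : {(pred_b == y_b).mean():.2f}")
```

The accuracy-optimal threshold lands near group A's optimum, so group B receives a far higher positive-prediction rate and markedly lower accuracy, i.e., disparate impact appears in the model's output even though both groups had equal label rates in the training data.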