Detecting Bias in Natural Language Texts

Research Areas: Human Centric Security

Principal Investigator: Julia (Taylor) Rayz

Bias detection has become an increasingly popular area within natural language processing. The general goal is not only to identify that bias exists, and possibly flag it, but, more importantly, to reduce its impact on models learned from data that contain it. Examples of bias within natural language processing include gender and ethnic bias that can be traced through longitudinal data. However, biased information is also present in reporting on various perspectives of events, social or political, as well as in what is commonly known as propaganda, the latter heavily overlapping with psychological warfare and false information.
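As one illustration of what detecting gender bias can look like in practice, the sketch below computes a simple embedding-association score in the spirit of word-embedding association tests: a word leaning closer to one attribute set (e.g., male terms) than another (female terms) gets a signed score. The tiny 3-dimensional vectors are entirely hypothetical toy data for illustration; real analyses would use trained word embeddings, and this is not a description of the project's actual method.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to set B;
    # positive values lean toward A, negative toward B
    return float(np.mean([cosine(w, a) for a in attrs_a])
                 - np.mean([cosine(w, b) for b in attrs_b]))

# Toy "embeddings" (hypothetical values, for illustration only)
vec = {
    "engineer": np.array([0.9, 0.1, 0.0]),
    "nurse":    np.array([0.1, 0.9, 0.0]),
    "he":       np.array([1.0, 0.0, 0.0]),
    "she":      np.array([0.0, 1.0, 0.0]),
}

male, female = [vec["he"]], [vec["she"]]
for word in ("engineer", "nurse"):
    score = association(vec[word], male, female)
    print(f"{word}: {score:+.3f}")
```

With these toy vectors, "engineer" scores positive (male-leaning) and "nurse" scores negative (female-leaning), showing how such associations surface in vector space even before any downstream model is trained.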

Personnel

Students: Geetanjali Bihani

Coming Up!

Our annual security symposium will take place on April 7th and 8th, 2020.
Purdue University, West Lafayette, IN
