Discriminatory Biases in Data for Machine Learning and Human Rights

Conference: The IAFOR International Conference on Education in Hawaii (IICE2022)
Title: Discriminatory Biases in Data for Machine Learning and Human Rights
Stream: Education, Sustainability & Society: Social Justice, Development & Political Movements
Presentation Type: Virtual Presentation
Authors:
Euxhenia Hodo, John Jay College of Criminal Justice, United States

Abstract:

The intersection of data generated by machine learning algorithms and human rights is not always obvious, yet algorithmic outputs are often accepted as true. Algorithms are created by people, and so they are not inherently sensitive to gender, social, racial, or moral issues. Typically, human characteristics such as gender, race, and socio-economic class are used to predict our expected performance on certain tasks. This practice is problematic because it sets expectations directly from a protected attribute. How, then, do we ensure that machine learning datasets are not embedded with racist, sexist, and other potential violations of human rights? The objective of this study is to explain how we can create realistic algorithms and accurate datasets while upholding human decency and avoiding disparate treatment and disparate impact. History and political systems may bend human rights disparities toward justice over time; machine learning cannot, because it is trained on a history saturated with bias. So, where do we go from here? We can formalize a non-discrimination criterion that optimizes fairness: a system in which a protected human characteristic is not associated with an expected outcome for certain categories. This topic is of great interest and importance because the continued creation of flawed risk assessment algorithms can and will deepen discrimination gaps and violations of human rights.
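
The non-discrimination criterion mentioned above is often formalized as statistical independence (demographic parity): the predicted outcome should carry no information about the protected attribute, i.e. P(Ŷ = 1 | A = a) = P(Ŷ = 1 | A = b) for all groups a and b. The sketch below, using hypothetical names and toy data not drawn from the presentation, shows one way to measure how far a set of predictions deviates from that criterion.

```python
# Illustrative sketch (assumed, not from the presentation): measuring the
# demographic parity gap, i.e. how unevenly positive predictions are
# distributed across groups defined by a protected attribute.
from collections import defaultdict

def demographic_parity_gap(y_pred, protected):
    """Return (gap, per-group rates), where gap is the largest difference
    in positive-prediction rates between any two protected groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for yhat, group in zip(y_pred, protected):
        totals[group] += 1
        positives[group] += int(yhat == 1)
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: six predictions, two protected groups.
gap, rates = demographic_parity_gap(
    y_pred=[1, 0, 1, 1, 0, 0],
    protected=["A", "A", "A", "B", "B", "B"],
)
print(rates)  # positive-prediction rate per group
print(gap)    # 0 would mean the independence criterion is exactly satisfied
```

A gap of zero means the classifier's positive predictions are unrelated to group membership; in practice, fairness-aware training or post-processing is used to push this gap toward zero while preserving accuracy.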


