Title:
Mitigating Racial Biases in Toxic Language Detection

Author(s)
Halevy, Matan
Advisor(s)
Howard, Ayanna M.
Abstract
Recent research has demonstrated that racial biases against users who write African American English exist in popular toxic language datasets. While previous work has focused on a single fairness criterion, we propose using additional descriptive fairness metrics to better understand the source of these biases. We demonstrate that different benchmark classifiers, as well as two in-process bias-remediation techniques, propagate racial biases even in a larger corpus. We then propose a novel ensemble framework that uses a specialized classifier fine-tuned to the African American English dialect. We show that our proposed framework substantially reduces the racial biases that the model learns from these datasets. We demonstrate that the ensemble framework improves fairness metrics across all sample datasets with minimal impact on classification performance, and we provide empirical evidence of its ability to unlearn annotation biases toward authors who use African American English. ** Please note that this work may contain examples of offensive words and phrases.
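
Illustrative sketch: the abstract describes an ensemble that pairs a general toxicity classifier with a classifier specialized for African American English. The sketch below shows one plausible way such an ensemble could route inputs by an estimated dialect probability; the function names, the routing threshold, and the toy stand-in models are hypothetical assumptions for illustration, not the thesis's actual implementation.

    # Minimal sketch of a dialect-aware toxicity ensemble (hypothetical).
    from typing import Callable

    def ensemble_predict(
        text: str,
        dialect_prob: Callable[[str], float],  # estimated P(text is AAE)
        general_clf: Callable[[str], float],   # toxicity score, general model
        aae_clf: Callable[[str], float],       # toxicity score, AAE-fine-tuned model
        aae_threshold: float = 0.5,            # hypothetical routing threshold
    ) -> float:
        """Route likely-AAE inputs to the specialized classifier,
        otherwise fall back to the general-purpose classifier."""
        if dialect_prob(text) >= aae_threshold:
            return aae_clf(text)
        return general_clf(text)

    if __name__ == "__main__":
        # Dummy stand-ins so the sketch runs end to end.
        toy_dialect = lambda t: 0.9 if "finna" in t.lower() else 0.1
        toy_general = lambda t: 0.8
        toy_aae = lambda t: 0.3  # specialized model over-flags AAE less often
        print(ensemble_predict("I'm finna head out", toy_dialect, toy_general, toy_aae))

In practice each callable would wrap a trained model; the point of the design is that the decision for dialect-typical text is delegated to the classifier whose fine-tuning is less affected by annotation bias.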
Date Issued
2022-05-05
Resource Type
Text
Resource Subtype
Thesis