No Word Embedding Model Is Perfect

Evaluating the Representation Accuracy for Social Bias in the Media

authored by
Maximilian Spliethöver, Maximilian Keiff, Henning Wachsmuth
Abstract

News articles both shape and reflect public opinion across the political spectrum. Analyzing them for social bias can thus provide valuable insights, such as prevailing stereotypes in society and the media, which are often adopted by NLP models trained on such data. Recent work has relied on word embedding bias measures, such as WEAT. However, several representation issues of embeddings can harm the measures' accuracy, including low-resource settings and token frequency differences. In this work, we study which embedding algorithm serves best to accurately measure the types of social bias known to exist in US online news articles. To cover the whole spectrum of political bias in the US, we collect 500k articles and review psychology literature with respect to expected social bias. We then quantify social bias using WEAT along with embedding algorithms that account for the aforementioned issues. We compare how models trained with these algorithms on news articles represent the expected social bias. Our results suggest that the standard way to quantify bias does not align well with knowledge from psychology. While the proposed algorithms reduce the gap, they still do not fully match the literature.
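The bias measure referenced in the abstract, WEAT (Word Embedding Association Test, Caliskan et al., 2017), compares how strongly two target word sets associate with two attribute word sets via cosine similarity. Below is a minimal sketch of the WEAT effect-size computation on toy 2-D vectors; the vectors and word sets are hypothetical illustrations, not the lexicons or embeddings used in the paper.

```python
import numpy as np

def cos_sim(u, v):
    # Cosine similarity between two vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A
    # minus mean similarity of w to attribute set B.
    return np.mean([cos_sim(w, a) for a in A]) - np.mean([cos_sim(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # WEAT effect size d: difference of the mean associations of the
    # two target sets X and Y, normalized by the pooled standard
    # deviation over all target words. Values lie in [-2, 2].
    s = [association(w, A, B) for w in X + Y]
    mean_x = np.mean(s[:len(X)])
    mean_y = np.mean(s[len(X):])
    return (mean_x - mean_y) / np.std(s)

if __name__ == "__main__":
    # Toy example: targets in X point toward attribute A,
    # targets in Y toward attribute B, yielding a strong positive bias.
    A = [np.array([1.0, 0.0])]
    B = [np.array([0.0, 1.0])]
    X = [np.array([0.95, 0.05]), np.array([0.9, 0.1])]
    Y = [np.array([0.05, 0.95]), np.array([0.1, 0.9])]
    print(weat_effect_size(X, Y, A, B))
```

A larger absolute effect size indicates a stronger differential association; the representation issues discussed in the abstract (e.g., token frequency differences) can distort these cosine similarities and hence the measured bias.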

Organisation(s)
Natural Language Processing Section
Institute of Artificial Intelligence
External Organisation(s)
Universität Hamburg
Type
Conference contribution
Pages
2081-2093
No. of pages
13
Publication date
12.2022
Publication status
Published
Peer reviewed
Yes
ASJC Scopus subject areas
Computational Theory and Mathematics, Computer Science Applications, Information Systems
Sustainable Development Goals
SDG 10 - Reduced Inequalities
Electronic version(s)
https://doi.org/10.18653/v1/2022.findings-emnlp.152 (Access: Open)