Quantifying gender bias towards politicians in cross-lingual language models

Research output: Contribution to journal › Journal article › Research › peer-review



Recent research has demonstrated that large pre-trained language models reflect societal biases expressed in natural language. The present paper introduces a simple method for probing language models to conduct a multilingual study of gender bias towards politicians. We quantify the usage of adjectives and verbs generated by language models surrounding the names of politicians as a function of their gender. To this end, we curate a dataset of 250k politicians worldwide, including their names and gender. Our study is conducted in seven languages across six different language modeling architectures. The results demonstrate that pre-trained language models' stance towards politicians varies strongly across the analyzed languages. We find that while some words, such as dead and designated, are associated with both male and female politicians, a few specific words, such as beautiful and divorced, are predominantly associated with female politicians. Finally, and contrary to previous findings, our study suggests that larger language models do not tend to be significantly more gender-biased than smaller ones.
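The probing setup described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: the model choice (gpt2), the prompt template, and the two-name politician list are assumptions made for the example, whereas the actual study covers 250k politicians, seven languages, and six architectures.

```python
# Sketch: prompt a pre-trained language model with a politician's name,
# then count adjectives and verbs in the generated continuations, grouped
# by the politician's gender. Requires: pip install transformers spacy
# and python -m spacy download en_core_web_sm.
from collections import Counter

import spacy
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # assumed model
nlp = spacy.load("en_core_web_sm")

# Toy stand-in for the curated dataset of politicians and their genders.
politicians = {"Angela Merkel": "female", "Barack Obama": "male"}

counts = {"female": Counter(), "male": Counter()}
for name, gender in politicians.items():
    prompt = f"{name} is known for being"  # hypothetical template
    outputs = generator(prompt, max_new_tokens=20,
                        num_return_sequences=5, do_sample=True)
    for out in outputs:
        # Strip the prompt, keep only the generated continuation.
        continuation = out["generated_text"][len(prompt):]
        for token in nlp(continuation):
            if token.pos_ in {"ADJ", "VERB"}:
                counts[gender][token.lemma_.lower()] += 1

# Lemmas disproportionately associated with one gender hint at bias.
for gender, counter in counts.items():
    print(gender, counter.most_common(10))
```

With enough names and samples per name, comparing the two frequency distributions (e.g., words far more common for one gender) surfaces the kind of associations the paper reports, such as beautiful being predominantly tied to female politicians.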

Original language: English
Article number: e0277640
Journal: PLoS ONE
Issue number: 11 (November)
Pages (from-to): 1-24
Publication status: Published - 2023

Bibliographical note

Publisher Copyright:
© 2023 Stańczak et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
