Sex and gender bias in natural language processing

Davide Cirillo, Alfonso Valencia, Marta Villegas, Hila Gonen, Enrico Santus, Marta R. Costa-Jussà

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

3 Scopus citations

Abstract

Natural language processing (NLP) is increasingly applied to a broad range of sensitive tasks, such as human resources, biomedicine, and healthcare. Accordingly, a growing body of research is investigating the impact of sex and gender bias in NLP models and in the data on which they are trained. As NLP systems become more pervasive in our societies, their vulnerability to sex and gender bias risks perpetuating prejudice and discriminatory decisions. Addressing this challenge requires widespread awareness of bias in the NLP community, together with more robust learning algorithms and fair solutions for the development and evaluation of NLP methods. In this chapter, we survey state-of-the-art NLP models and some popular applications to biomedicine and health, with special emphasis on chatbots for mental health. Moreover, we discuss sources and implications of bias in this area and show examples of notable debiasing methods.
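
The abstract only names debiasing methods without detailing them. As an illustrative sketch (not the chapter's own code or data), the Python below reconstructs the projection step of one widely cited method for word embeddings, the "hard debiasing" approach of Bolukbasi et al. (2016): estimate a gender direction from definitional word pairs, then remove that component from gender-neutral words. The vectors and word lists are toy placeholders standing in for real pretrained embeddings.

import numpy as np

def gender_direction(emb, pairs):
    """Estimate a gender direction by averaging the normalized
    difference vectors of definitional pairs such as ("he", "she")."""
    diffs = []
    for a, b in pairs:
        d = emb[a] - emb[b]
        diffs.append(d / np.linalg.norm(d))
    return np.mean(diffs, axis=0)

def neutralize(vec, direction):
    """Remove the component of `vec` along the gender direction, so the
    word carries no signal on that axis."""
    direction = direction / np.linalg.norm(direction)
    return vec - np.dot(vec, direction) * direction

# Toy 3-dimensional embeddings (placeholders for real pretrained vectors).
emb = {
    "he":    np.array([0.9, 0.1, 0.0]),
    "she":   np.array([-0.9, 0.1, 0.0]),
    "nurse": np.array([-0.4, 0.7, 0.2]),
}

g = gender_direction(emb, [("he", "she")])
emb["nurse"] = neutralize(emb["nurse"], g)
print(np.dot(emb["nurse"], g))  # ~0.0: the gender component is removed

This projection-based idea is only one of several families the literature covers; later work has shown that such geometric fixes can leave residual bias recoverable by classifiers, which motivates the broader range of methods the chapter surveys.
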

Original language: English
Title of host publication: Sex and Gender Bias in Technology and Artificial Intelligence
Subtitle of host publication: Biomedicine and Healthcare Applications
Publisher: Elsevier
Pages: 113-132
Number of pages: 20
ISBN (Electronic): 9780128213926
ISBN (Print): 9780128213933
DOIs:
State: Published - 1 Jan 2022
Externally published: Yes

Bibliographical note

Publisher Copyright: © 2022 Elsevier Inc. All rights reserved.

Keywords

  • Debiasing methods
  • Language models
  • Machine translation
  • Natural language processing
  • Sex and gender bias
