Abstract
Natural language processing (NLP) is increasingly applied to a broad range of sensitive tasks, such as human resources, biomedicine, and healthcare. Accordingly, a growing body of research is investigating the impact of sex and gender bias on NLP models and on the data on which they are trained. As NLP systems become more pervasive in our societies, their vulnerability to sex and gender bias may perpetuate prejudice and discriminatory decisions. To address this challenge, widespread awareness of bias must be fostered in the NLP community, and more robust learning algorithms and fairer solutions are required for the development and evaluation of NLP methods. In this chapter, we survey state-of-the-art NLP models and some popular applications in biomedicine and health, with special emphasis on chatbots for mental health. Moreover, we discuss sources and implications of bias in this area and show examples of notable debiasing methods.
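One widely cited family of debiasing methods (offered here only as an illustration, not necessarily one of the methods surveyed in the chapter) removes a learned "gender direction" from word embeddings by orthogonal projection. A minimal sketch with hypothetical toy vectors, assuming the gender axis is estimated from a he/she contrast:

```python
import numpy as np

def neutralize(vec, gender_direction):
    """Remove the component of `vec` that lies along the gender direction."""
    g = gender_direction / np.linalg.norm(gender_direction)  # unit gender axis
    return vec - np.dot(vec, g) * g                          # orthogonal projection

# Toy 3-d "embeddings" (illustrative only, not real word vectors).
he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.2, 0.1])
gender_direction = he - she          # axis capturing the he-she contrast

nurse = np.array([-0.4, 0.5, 0.3])   # hypothetical gender-skewed vector
nurse_debiased = neutralize(nurse, gender_direction)

# After neutralization the vector is orthogonal to the gender axis.
print(np.dot(nurse_debiased, gender_direction))  # ≈ 0.0
```

In practice the gender direction is estimated from many definitional word pairs (e.g. via PCA over their differences), and only words that should be gender-neutral are neutralized; this sketch shows just the core projection step.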
| Original language | English |
|---|---|
| Title of host publication | Sex and Gender Bias in Technology and Artificial Intelligence |
| Subtitle of host publication | Biomedicine and Healthcare Applications |
| Publisher | Elsevier |
| Pages | 113-132 |
| Number of pages | 20 |
| ISBN (Electronic) | 9780128213926 |
| ISBN (Print) | 9780128213933 |
| DOIs | |
| State | Published - 1 Jan 2022 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2022 Elsevier Inc. All rights reserved.
Keywords
- Debiasing methods
- Language models
- Machine translation
- Natural language processing
- Sex and gender bias