Abstract
Many natural languages assign grammatical gender not only to animate nouns but also to inanimate ones. In such languages, words that relate to a gender-marked noun are inflected to agree with the noun's gender. We show that this agreement affects the word representations of inanimate nouns: nouns that share a grammatical gender end up closer to each other in embedding space than nouns of different genders. While "embedding de-biasing" methods fail to remove this effect, we demonstrate that a careful application of methods that neutralize grammatical gender signals in the words' contexts when training word embeddings is effective in removing it. Fixing the grammatical gender bias improves the quality of the resulting word embeddings, both in monolingual and cross-lingual settings. We note that successfully removing gender signals, while achievable, is not trivial, and that a language-specific morphological analyzer, applied with care, is essential for achieving good results.
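To make the described idea concrete, the sketch below illustrates the two steps the abstract alludes to: quantifying whether same-gender nouns cluster together in embedding space, and retraining embeddings on a corpus in which gender-agreement morphology on context words has been neutralized. This is a minimal sketch, not the paper's implementation: the `analyze` function is a hypothetical stand-in for a language-specific morphological analyzer, the helper names and hyperparameters are illustrative assumptions, and the gensim calls assume gensim 4.x. In particular, the paper's method targets gender signals in the nouns' contexts, which this simplified version approximates by lemmatizing gender-agreeing words throughout the corpus.

```python
# Minimal sketch (assumptions, not the paper's exact procedure):
# (1) train skip-gram embeddings on a corpus whose gender-agreement
#     morphology has been lemmatized away, and
# (2) measure whether nouns sharing a grammatical gender sit closer
#     to each other than to nouns of the other gender.

from itertools import combinations, product
from statistics import mean

from gensim.models import Word2Vec  # assumes gensim >= 4.x


def analyze(token):
    """Hypothetical morphological analyzer (placeholder, not a real library call).

    Should return (lemma, carries_gender_agreement) for a surface form,
    e.g. an adjective inflected for feminine agreement -> (its lemma, True).
    """
    raise NotImplementedError("plug in a language-specific analyzer here")


def neutralize_sentence(tokens):
    """Replace gender-agreeing words with their lemmas; leave other tokens as-is."""
    neutral = []
    for tok in tokens:
        lemma, agrees = analyze(tok)
        neutral.append(lemma if agrees else tok)
    return neutral


def train_neutralized_embeddings(sentences):
    """Train skip-gram embeddings on the gender-neutralized corpus."""
    corpus = [neutralize_sentence(s) for s in sentences]
    return Word2Vec(corpus, vector_size=300, window=5, min_count=5, sg=1)


def gender_clustering_gap(model, masc_nouns, fem_nouns):
    """Mean same-gender similarity minus mean cross-gender similarity.

    A clearly positive gap indicates that nouns sharing a grammatical
    gender are closer to each other than to nouns of the other gender.
    """
    same = [float(model.wv.similarity(a, b))
            for nouns in (masc_nouns, fem_nouns)
            for a, b in combinations(nouns, 2)]
    cross = [float(model.wv.similarity(a, b))
             for a, b in product(masc_nouns, fem_nouns)]
    return mean(same) - mean(cross)
```

In practice one would plug in a real analyzer for the target language; as the abstract notes, doing this carelessly is not enough, and careful use of the analyzer is what makes the neutralization effective.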
Original language | English
---|---|
Title of host publication | CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference
Publisher | Association for Computational Linguistics
Pages | 463-471
Number of pages | 9
ISBN (Electronic) | 9781950737727
State | Published - 2019
Event | 23rd Conference on Computational Natural Language Learning, CoNLL 2019 - Hong Kong, China; Duration: 3 Nov 2019 → 4 Nov 2019
Publication series
Name | CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference
---|---|
Conference
Conference | 23rd Conference on Computational Natural Language Learning, CoNLL 2019 |
---|---|
Country/Territory | China |
City | Hong Kong |
Period | 3/11/19 → 4/11/19 |
Bibliographical note
Publisher Copyright: © 2019 Association for Computational Linguistics.