Abstract
Recent advances in representation learning and adversarial training appear to succeed in removing unwanted features from learned representations. We show that demographic information about authors is encoded in, and can be recovered from, the intermediate representations learned by text-based neural classifiers. The implication is that decisions of classifiers trained on textual data are not agnostic to, and likely condition on, demographic attributes. When attempting to remove such demographic information using adversarial training, we find that while the adversarial component reaches chance-level development-set accuracy during training, a post-hoc classifier trained on the resulting encoded sentences still reaches substantially higher classification accuracy on the same data. This behavior is consistent across several tasks, demographic attributes, and datasets. We explore several techniques for improving the effectiveness of the adversarial component. Our main conclusion is a cautionary one: do not rely on adversarial training to achieve representations that are invariant to sensitive features.
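The adversarial setup the abstract describes can be sketched as follows. This is a toy NumPy illustration under assumed simplifications (a linear encoder, logistic-regression heads, manual gradients), not the paper's actual code; all names and hyperparameters are hypothetical. The key idea is gradient reversal: the adversary head is trained to predict the protected attribute from the representation `h`, while the encoder receives that gradient with its sign flipped, pushing `h` toward invariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, d, k = 200, 10, 4                        # samples, input dim, representation dim
X = rng.normal(size=(n, d))
y_task = (X[:, 0] > 0).astype(float)        # main-task label (toy)
y_prot = (X[:, 1] > 0).astype(float)        # protected attribute (toy)

W_enc = rng.normal(scale=0.1, size=(d, k))  # linear encoder
w_task = np.zeros(k)                        # main-task head
w_adv = np.zeros(k)                         # adversary head
lr, lam = 0.5, 1.0                          # lam scales the reversed gradient

for _ in range(300):
    h = X @ W_enc                           # encode
    g_task = (sigmoid(h @ w_task) - y_task) / n   # logistic-loss grad w.r.t. logits
    g_adv = (sigmoid(h @ w_adv) - y_prot) / n

    # Encoder gradient: follow the task head, REVERSE the adversary's
    # gradient so the encoder removes attribute information from h.
    grad_h = np.outer(g_task, w_task) - lam * np.outer(g_adv, w_adv)
    W_enc -= lr * (X.T @ grad_h)

    # Both heads minimise their own loss on the current representation.
    w_task -= lr * (h.T @ g_task)
    w_adv -= lr * (h.T @ g_adv)
```

The paper's cautionary finding corresponds to the post-hoc step: even when `w_adv` ends near chance accuracy, a fresh classifier trained from scratch on the frozen encodings `X @ W_enc` can often still recover `y_prot` well above chance.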
Original language | English |
---|---|
Title of host publication | Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 |
Editors | Ellen Riloff, David Chiang, Julia Hockenmaier, Jun'ichi Tsujii |
Publisher | Association for Computational Linguistics |
Pages | 11-21 |
Number of pages | 11 |
ISBN (Electronic) | 9781948087841 |
State | Published - 2018 |
Event | 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, Brussels, Belgium. Duration: 31 Oct 2018 → 4 Nov 2018 |
Publication series
Name | Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 |
---|---|
Conference
Conference | 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 |
---|---|
Country/Territory | Belgium |
City | Brussels |
Period | 31/10/18 → 4/11/18 |
Bibliographical note
Publisher Copyright: © 2018 Association for Computational Linguistics
Funding
We would like to thank Moni Shahar, Felix Kreuk, Yova Kementchedjhieva and the BIU NLP lab for fruitful conversations and helpful comments. We also thank Su Lin Blodgett for her help in supplying the DIAL dataset and for clarifications. This work was supported in part by the Israeli Science Foundation (grant number 1555/15) and the German Research Foundation via the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).
Funders | Funder number |
---|---|
DIP | DA 1600/1-1 |
German-Israeli Project Cooperation | |
Israeli Science Foundation | 1555/15 |
Deutsche Forschungsgemeinschaft | |