Abstract
The notion of “in-domain data” in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are often unavailable, making it challenging to build domain-specific systems. We show that massive pretrained language models implicitly learn sentence representations that cluster by domains without supervision, suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and by precision and recall of sentence selection with respect to an oracle.
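The abstract describes a two-step recipe: cluster pretrained LM sentence embeddings to recover domains without supervision, then rank candidate training sentences by similarity to a small in-domain seed set. The sketch below illustrates both steps under stated assumptions: it uses bert-base-uncased via Hugging Face transformers, mean pooling for sentence vectors, scikit-learn's GaussianMixture for clustering, and plain cosine similarity for selection. These are illustrative choices, not the authors' released implementation.

```python
# Minimal sketch of unsupervised domain clustering + data selection with a
# pretrained LM. Model and library choices (bert-base-uncased, scikit-learn
# GMM, cosine ranking) are illustrative assumptions, not the paper's code.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.mixture import GaussianMixture

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentences):
    """Mean-pooled last hidden states as fixed-size sentence vectors."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, truncation=True,
                          return_tensors="pt")
        hidden = model(**batch).last_hidden_state        # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
        pooled = (hidden * mask).sum(1) / mask.sum(1)    # (B, H)
    return pooled.numpy()

# Step 1 -- unsupervised domain clusters: fit a GMM over the embeddings.
# Diagonal covariance keeps this tiny toy example numerically stable.
corpus = ["The patient was administered 50mg of the drug.",
          "The court ruled in favor of the defendant.",
          "I loved this movie, great acting!",
          "The engine control unit reports a fault code."]
X = embed(corpus)
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(X)
print("cluster assignments:", gmm.predict(X))

# Step 2 -- data selection: rank candidates by cosine similarity to the
# centroid of a small in-domain seed set, then keep the top-k sentences.
seed = embed(["Dosage should be adjusted for renal impairment."]).mean(0)
sims = X @ seed / (np.linalg.norm(X, axis=1) * np.linalg.norm(seed))
top_k = np.argsort(-sims)[:2]
print("selected:", [corpus[i] for i in top_k])
```

In this toy run the medical seed sentence should pull the clinical candidate to the top of the ranking; in practice the same ranking is run over millions of candidate sentences, and the top-ranked portion is used as training data for the domain-specific NMT system.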
Original language | English |
---|---|
Title of host publication | ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 7747-7763 |
Number of pages | 17 |
ISBN (Electronic) | 9781952148255 |
DOIs | |
State | Published - 2020 |
Event | 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 - Virtual, Online, United States |
Duration | 5 Jul 2020 → 10 Jul 2020 |
Publication series
Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
---|---|
ISSN (Print) | 0736-587X |
Conference
Conference | 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 5/07/20 → 10/07/20 |
Bibliographical note
Publisher Copyright: © 2020 Association for Computational Linguistics
Funding
We thank Wei Wang for early discussions on domain adaptation and data selection that inspired this work during Roee's internship at Google Translate.