Abstract
Masked language modeling (MLM) is one of the key sub-tasks in vision-language pretraining. In the cross-modal setting, tokens in the sentence are masked at random, and the model predicts the masked tokens given the image and the text. In this paper, we observe several key disadvantages of MLM in this setting. First, as captions tend to be short, in a third of the sentences no token is sampled for masking. Second, the majority of masked tokens are stop-words and punctuation, leading to underutilization of the image. We investigate a range of alternative masking strategies specific to the cross-modal setting that address these shortcomings, aiming for better fusion of text and image in the learned representation. When pretraining the LXMERT model, our alternative masking strategies consistently improve over the original masking strategy on three downstream tasks, especially in low-resource settings. Further, our pretraining approach substantially outperforms the baseline model on a prompt-based probing task designed to elicit image objects. These results and our analysis indicate that our method allows for better utilization of the training data.
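To make the masking problem concrete, the sketch below illustrates one way a strategy of the kind described in the abstract could restrict random masking to content words while guaranteeing at least one masked token per caption. It is a minimal illustration, not the authors' released implementation: the function `mask_content_words`, the toy stop-word list, the 15% masking rate, and the `[MASK]` placeholder are all assumptions made for the example.

```python
# Minimal sketch of a content-word-biased masking strategy (illustrative only;
# not the paper's code). Assumes whitespace-tokenized captions, a toy
# stop-word list, and a BERT-style "[MASK]" placeholder.
import random
import string

# Toy stop-word list for illustration; a real setup might use NLTK's list.
STOP_WORDS = {"a", "an", "the", "is", "are", "was", "of", "on", "in",
              "at", "and", "to", "with"}
MASK_TOKEN = "[MASK]"

def is_content_word(token):
    """A token is a content word if it is neither a stop-word nor punctuation."""
    return (token.lower() not in STOP_WORDS
            and not all(ch in string.punctuation for ch in token))

def mask_content_words(tokens, mask_prob=0.15):
    """Mask roughly `mask_prob` of the tokens, sampling only content words.

    Falls back to uniform sampling when the caption has no content words,
    and always masks at least one token, so short captions are never skipped.
    Returns the masked token list and the masked positions.
    """
    candidates = [i for i, tok in enumerate(tokens) if is_content_word(tok)]
    if not candidates:  # no content words: fall back to plain random masking
        candidates = list(range(len(tokens)))
    n_mask = max(1, round(mask_prob * len(tokens)))  # at least one mask
    chosen = set(random.sample(candidates, min(n_mask, len(candidates))))
    masked = [MASK_TOKEN if i in chosen else tok for i, tok in enumerate(tokens)]
    return masked, sorted(chosen)

# A typical short caption: only "man", "riding", "horse", "beach" can be masked.
caption = "a man is riding a horse on the beach".split()
print(mask_content_words(caption))
```

Under uniform 15% masking, a nine-token caption like the one above draws only about one mask, and most draws would land on stop-words; restricting the candidate set forces the model to predict image-grounded words such as "horse", which is the kind of better image utilization the abstract describes.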
| Original language | English |
| --- | --- |
| Title of host publication | Findings of the Association for Computational Linguistics, Findings of ACL |
| Subtitle of host publication | EMNLP 2021 |
| Editors | Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 3013-3028 |
| Number of pages | 16 |
| ISBN (Electronic) | 9781955917100 |
| State | Published - 2021 |
| Externally published | Yes |
| Event | 2021 Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021, Punta Cana, Dominican Republic, 7 Nov 2021 → 11 Nov 2021 |
Publication series
| Name | Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 |
| --- | --- |
Conference
| Conference | 2021 Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 |
| --- | --- |
| Country/Territory | Dominican Republic |
| City | Punta Cana |
| Period | 7/11/21 → 11/11/21 |
Bibliographical note
Publisher Copyright: © 2021 Association for Computational Linguistics.
Funding
We thank the reviewers for the helpful comments and feedback. We thank Hao Tan for sharing the code and answering questions regarding LXMERT pre-training. We also thank Leshem Choshen, Ronen Tamari, Shahaf Finder, and Nitzan Guetta Bitton for their valuable feedback. This work was supported in part by the Center for Interdisciplinary Data Science Research at the Hebrew University of Jerusalem, and research gifts from the Allen Institute for AI and Intel Corporation.
| Funders | Funder number |
| --- | --- |
| Hebrew University of Jerusalem | |