Abstract
Prepositions are very common and very ambiguous, and understanding their sense is critical for understanding the meaning of the sentence. Supervised corpora for the preposition-sense disambiguation task are small, suggesting a semi-supervised approach to the task. We show that signals from unannotated multilingual data can be used to improve supervised preposition-sense disambiguation. Our approach pre-trains an LSTM encoder to predict the translation of a preposition, then incorporates the pre-trained encoder as a component in a supervised classification system and fine-tunes it for the task. The multilingual signals consistently improve results on two preposition-sense datasets.
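The pre-train-then-fine-tune recipe described in the abstract can be illustrated with a short sketch. The PyTorch code below is an illustrative reconstruction, not the authors' implementation: the bidirectional-LSTM context encoder, the layer sizes, the vocabulary size, and the numbers of translation and sense labels are all assumptions made for the example.

```python
# Minimal sketch of the two-stage approach: (1) pre-train an encoder to
# predict a preposition's translation from unannotated parallel data,
# (2) reuse that encoder in a supervised sense classifier and fine-tune.
# All sizes and label counts below are illustrative assumptions.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Encode the tokens around a preposition into a fixed-size vector."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):                    # (batch, seq_len)
        embedded = self.embed(token_ids)             # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.lstm(embedded)            # h_n: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        return torch.cat([h_n[0], h_n[1]], dim=-1)   # (batch, 2 * hidden_dim)

# Stage 1: pre-train the encoder to predict the preposition's translation,
# with labels derived from unannotated multilingual (parallel) data.
encoder = ContextEncoder(vocab_size=50_000)          # assumed vocabulary size
translation_head = nn.Linear(2 * 200, 300)           # assumed 300 translation labels
# ... optimize encoder + translation_head with cross-entropy here ...

# Stage 2: plug the pre-trained encoder into the supervised sense classifier
# and fine-tune the whole model on the small sense-annotated corpus.
sense_head = nn.Linear(2 * 200, 34)                  # assumed 34 sense labels
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(sense_head.parameters()), lr=1e-3)

def sense_logits(token_ids):
    return sense_head(encoder(token_ids))
# ... optimize with cross-entropy on the sense-annotated examples ...
```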
Original language | English |
---|---|
Title of host publication | COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016 |
Subtitle of host publication | Technical Papers |
Publisher | Association for Computational Linguistics, ACL Anthology |
Pages | 2718-2729 |
Number of pages | 12 |
ISBN (Print) | 9784879747020 |
State | Published - 2016 |
Event | 26th International Conference on Computational Linguistics, COLING 2016 - Osaka, Japan |
Duration | 11 Dec 2016 → 16 Dec 2016 |
Publication series
Name | COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016: Technical Papers |
---|---|
Conference
Conference | 26th International Conference on Computational Linguistics, COLING 2016 |
---|---|
Country/Territory | Japan |
City | Osaka |
Period | 11/12/16 → 16/12/16 |
Bibliographical note
Publisher Copyright: © 1963-2018 ACL.
Funding
The work is supported by the Israeli Science Foundation (grant number 1555/15).
Funders | Funder number |
---|---|
Israeli Science Foundation | 1555/15 |