Abstract
Distributional representations of words have recently been used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment. We investigate a collection of these state-of-the-art methods, and show that they do not actually learn a relation between two words. Instead, they learn an independent property of a single word in the pair: whether that word is a "prototypical hypernym".
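To make the claim concrete: the supervised methods in question typically represent a candidate pair (x, y) by combining the two word vectors, for example by concatenation, and train a classifier on labelled pairs. The sketch below is a minimal illustration with toy vectors and a made-up vocabulary (it is not the paper's code or data); it shows why a linear classifier over concatenated vectors can score each word independently, and thus succeed by detecting "prototypical hypernyms" on the y side rather than learning a relation between x and y.

```python
# Minimal sketch (assumed setup, not the paper's implementation): a linear
# classifier over concatenated word vectors for candidate pairs "x is-a y".
# Vectors and training pairs below are toy stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 50
vocab = ["cat", "animal", "banana", "fruit", "car", "object"]
vectors = {w: rng.normal(size=dim) for w in vocab}  # stand-in embeddings

# Toy training pairs labelled for hypernymy: (x, y, is_hypernym).
pairs = [("cat", "animal", 1), ("banana", "fruit", 1),
         ("animal", "cat", 0), ("car", "banana", 0)]

X = np.array([np.concatenate([vectors[x], vectors[y]]) for x, y, _ in pairs])
labels = np.array([label for _, _, label in pairs])

clf = LogisticRegression().fit(X, labels)

# Because the features are a plain concatenation, the linear score decomposes
# as w_x . v(x) + w_y . v(y): each word contributes independently, so the model
# can do well simply by learning which y's look like typical hypernyms
# ("animal", "fruit", "object") without relating x to y at all.
w_x, w_y = clf.coef_[0][:dim], clf.coef_[0][dim:]

def pair_score(x, y_word):
    # Independent contribution of x plus independent contribution of y.
    return w_x @ vectors[x] + w_y @ vectors[y_word] + clf.intercept_[0]

print(pair_score("cat", "animal"))
print(pair_score("car", "object"))  # unseen pair, scored largely by how "hypernym-like" y is
```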
| Original language | English |
| --- | --- |
| Title of host publication | NAACL HLT 2015 - 2015 Conference of the North American Chapter of the Association for Computational Linguistics |
| Subtitle of host publication | Human Language Technologies, Proceedings of the Conference |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 970-976 |
| Number of pages | 7 |
| ISBN (Electronic) | 9781941643495 |
| DOIs | |
| State | Published - 2015 |
| Event | Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2015 - Denver, United States, 31 May 2015 → 5 Jun 2015 |
Publication series

| Name |
| --- |
| NAACL HLT 2015 - 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference |
Conference

| Conference | Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2015 |
| --- | --- |
| Country/Territory | United States |
| City | Denver |
| Period | 31/05/15 → 5/06/15 |
Bibliographical note
Publisher Copyright: © 2015 Association for Computational Linguistics.