Crowdsourcing inference-rule evaluation

Naomi Zeichner, Jonathan Berant, Ido Dagan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

30 Scopus citations

Abstract

The importance of inference rules to semantic applications has long been recognized, and extensive work has been carried out to acquire inference-rule resources automatically. However, evaluating such resources has turned out to be a non-trivial task, slowing progress in the field. In this paper, we suggest a framework for evaluating inference-rule resources. Our framework simplifies a previously proposed "instance-based evaluation" method that involved substantial annotator training, making it suitable for crowdsourcing. We show that our method produces a large number of annotations with high inter-annotator agreement, at low cost and in a short period of time, without requiring the training of expert annotators.
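
To make the described setting concrete, the sketch below illustrates one possible way to aggregate crowdsourced yes/no judgments of inference-rule applications in context and to measure inter-annotator agreement. The example rules, sentences, labels, and the simple pairwise-agreement measure are illustrative assumptions only, not the paper's actual annotation protocol or reported statistics.

```python
from collections import Counter
from itertools import combinations

# Hypothetical crowd judgments for instance-based evaluation of inference rules.
# Each item pairs a rule application in a sentence context with yes/no judgments
# from several workers. The rules and labels are made up for illustration.
judgments = {
    ("X acquire Y -> X own Y", "Google acquired YouTube in 2006."): ["yes", "yes", "yes"],
    ("X beat Y -> X play Y",   "Spain beat Italy in the final."):   ["yes", "no", "yes"],
    ("X own Y -> X acquire Y", "She owns a small bookshop."):       ["no", "no", "no"],
}

def majority_label(labels):
    """Aggregate worker judgments for one rule application by majority vote."""
    return Counter(labels).most_common(1)[0][0]

def observed_pairwise_agreement(all_labels):
    """Fraction of agreeing worker pairs, averaged over all annotated items
    (a simple stand-in for a chance-corrected agreement statistic)."""
    per_item = []
    for labels in all_labels:
        pairs = list(combinations(labels, 2))
        per_item.append(sum(a == b for a, b in pairs) / len(pairs))
    return sum(per_item) / len(per_item)

if __name__ == "__main__":
    for (rule, sentence), labels in judgments.items():
        print(f"{rule!r} in {sentence!r}: {majority_label(labels)}")
    print("observed pairwise agreement:",
          round(observed_pairwise_agreement(judgments.values()), 3))
```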

Original language: English
Title of host publication: 50th Annual Meeting of the Association for Computational Linguistics, ACL 2012 - Proceedings of the Conference
Pages: 156-160
Number of pages: 5
State: Published - 2012
Event: 50th Annual Meeting of the Association for Computational Linguistics, ACL 2012 - Jeju Island, Korea, Republic of
Duration: 8 Jul 2012 to 14 Jul 2012

Publication series

Name: 50th Annual Meeting of the Association for Computational Linguistics, ACL 2012 - Proceedings of the Conference
Volume: 2

Conference

Conference: 50th Annual Meeting of the Association for Computational Linguistics, ACL 2012
Country/Territory: Korea, Republic of
City: Jeju Island
Period: 8/07/12 to 14/07/12
