Abstract
The textual entailment task - determining if a given text entails a given hypothesis - provides an abstraction of applied semantic inference. This paper first describes a general generative probabilistic setting for textual entailment. We then focus on the sub-task of recognizing whether the lexical concepts present in the hypothesis are entailed by the text. This problem is recast as one of text categorization in which the classes are the vocabulary words. We make novel use of Naïve Bayes to model the problem in an entirely unsupervised fashion. Empirical tests suggest that the method is effective and compares favorably with state-of-the-art heuristic scoring approaches.
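To make the modelling idea concrete, the sketch below gives one plausible reading of the abstract: each hypothesis word is treated as a class, documents that contain it serve as its (unsupervised) positive examples, and Naïve Bayes scores how strongly the words of the text predict that class. This is a minimal illustrative sketch, not code from the paper; all names, the add-alpha smoothing, and the probability estimates are assumptions.

```python
# Hypothetical sketch of lexical entailment scored as text categorization
# with an unsupervised Naive Bayes model, in the spirit of the abstract above.
# `documents` is a plain corpus (lists of tokens); no entailment labels are used.
import math
from collections import Counter

def nb_lexical_entailment(documents, text_words, hypothesis_word, alpha=1.0):
    """Estimate P(hypothesis_word is entailed | text_words).

    The hypothesis word acts as the class: documents containing it are the
    positive examples, all other documents are the negative examples, so the
    model is trained from raw co-occurrence counts alone (unsupervised)."""
    pos = [d for d in documents if hypothesis_word in d]
    neg = [d for d in documents if hypothesis_word not in d]
    if not pos or not neg:
        # Degenerate corpus: the word appears everywhere or nowhere.
        return 1.0 if pos else 0.0

    def word_counts(docs):
        c = Counter()
        for d in docs:
            c.update(set(d))  # document-level word presence
        return c

    pos_counts, neg_counts = word_counts(pos), word_counts(neg)
    vocab_size = len(set(w for d in documents for w in d))

    # log P(class) + sum_i log P(t_i | class), with add-alpha smoothing
    def log_score(counts, n_docs):
        s = math.log(n_docs / len(documents))
        for w in text_words:
            s += math.log((counts[w] + alpha) / (n_docs + alpha * vocab_size))
        return s

    lp = log_score(pos_counts, len(pos))
    ln = log_score(neg_counts, len(neg))
    return 1.0 / (1.0 + math.exp(ln - lp))  # posterior of the positive class
```

Under this reading, a hypothesis would be judged entailed when the scores of its content words (or their product) clear some decision threshold; the paper's exact estimation and decision details may differ.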
| Original language | English |
| --- | --- |
| Pages | 1050-1055 |
| Number of pages | 6 |
| State | Published - 2005 |
| Event | 20th National Conference on Artificial Intelligence and the 17th Innovative Applications of Artificial Intelligence Conference, AAAI-05/IAAI-05 - Pittsburgh, PA, United States; Duration: 9 Jul 2005 → 13 Jul 2005 |
Conference
| Conference | 20th National Conference on Artificial Intelligence and the 17th Innovative Applications of Artificial Intelligence Conference, AAAI-05/IAAI-05 |
| --- | --- |
| Country/Territory | United States |
| City | Pittsburgh, PA |
| Period | 9/07/05 → 13/07/05 |