Abstract
Textual entailment is a binary relation between two natural-language texts (called ‘text’ and ‘hypothesis’), where readers of the ‘text’ would agree that the ‘hypothesis’ is most likely true (Peter is snoring → A man sleeps). Its recognition requires an account of linguistic variability (an event may be realized in different ways, e.g. Peter buys the car ↔ The car is purchased by Peter) and of relationships between events (e.g. Peter buys the car → Peter owns the car). Unlike logic-based inference, textual entailment also covers cases of probable but still defeasible entailment (A hurricane hit Peter’s town → Peter’s town was damaged). Since human common-sense reasoning often involves such defeasible inferences, textual entailment is of considerable interest for real-world language processing tasks, as a generic, application-independent framework for semantic inference. This chapter discusses the history of textual entailment, approaches to recognizing it, and its integration in various NLP tasks.
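For a concrete sense of what recognizing textual entailment looks like in practice, the sketch below (not taken from the chapter) scores a text–hypothesis pair with a publicly available natural-language-inference model. The checkpoint name `roberta-large-mnli` and its label order are assumptions about that particular model, not part of the original abstract.

```python
# Minimal sketch: score a text-hypothesis pair with a pretrained NLI model.
# Assumes the Hugging Face checkpoint "roberta-large-mnli", whose labels are
# ordered contradiction / neutral / entailment.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def entailment_probability(text: str, hypothesis: str) -> float:
    """Return the model's estimated probability that `text` entails `hypothesis`."""
    inputs = tokenizer(text, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    return probs[2].item()  # index 2 = entailment for this checkpoint

# The examples from the abstract:
print(entailment_probability("Peter is snoring.", "A man sleeps."))
print(entailment_probability("Peter buys the car.", "Peter owns the car."))
```

Such a classifier returns a graded score rather than a proof, which fits the defeasible, probabilistic notion of entailment described above.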
| Field | Value |
|---|---|
| Original language | American English |
| Title of host publication | The Oxford Handbook of Computational Linguistics, 2nd edition |
| Editors | Ruslan Mitkov |
| Publisher | Oxford University Press, Oxford |
| Pages | 151-170 |
| Number of pages | 20 |
| Edition | 2nd |
| ISBN (Electronic) | 9780199573691 |
| DOIs | |
| State | Published - 2016 |