Dissertation Abstract: Learning High Precision Lexical Inferences

Vered Shwartz

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

The fundamental goal of natural language processing is to build models capable of human-level understanding of natural language. One of the obstacles to building such models is lexical variability, i.e. the ability to express the same meaning in various ways. Existing text representations excel at capturing relatedness (e.g. blue/red), but they lack the fine-grained distinction of the specific semantic relation between a pair of words. This article is a summary of a Ph.D. dissertation submitted to Bar-Ilan University in 2019, under the supervision of Professor Ido Dagan of the Computer Science Department. The dissertation explored methods for recognizing and extracting semantic relationships between concepts (cat is a type of animal), the constituents of noun compounds (baby oil is oil for babies), and verbal phrases (‘X died at Y’ means the same as ‘X lived until Y’ in certain contexts). The proposed models outperform highly competitive baselines and improve the state of the art on several benchmarks. The dissertation concludes by discussing two challenges on the way to human-level language understanding: developing more accurate text representations and learning to read between the lines.
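The abstract's distinction between relatedness and specific semantic relations can be made concrete with a minimal sketch (not taken from the dissertation): using toy, hand-crafted vectors and cosine similarity, both blue/red and cat/animal come out as highly related, yet the score alone says nothing about whether the relation is co-hyponymy or hypernymy. All vector values below are hypothetical and chosen purely for illustration.

```python
import numpy as np

# Toy, hand-crafted 4-dimensional "embeddings" (hypothetical values; real
# distributional vectors are learned from corpora and are much larger).
vectors = {
    "blue":   np.array([0.9, 0.1, 0.3, 0.2]),
    "red":    np.array([0.8, 0.2, 0.4, 0.1]),
    "cat":    np.array([0.1, 0.9, 0.2, 0.6]),
    "animal": np.array([0.2, 0.8, 0.3, 0.5]),
}

def cosine(u, v):
    """Cosine similarity: a standard measure of distributional relatedness."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Both pairs score as highly "related", yet blue/red are co-hyponyms
# (two colors), while cat/animal stand in a hypernymy (is-a) relation --
# the similarity score by itself cannot tell these relations apart.
print("blue/red:   ", round(cosine(vectors["blue"], vectors["red"]), 3))
print("cat/animal: ", round(cosine(vectors["cat"], vectors["animal"]), 3))
```

Running this prints similarity scores close to 1.0 for both pairs, which illustrates why models that rely on relatedness alone struggle with the high-precision lexical inferences the dissertation targets.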

Original language: English
Pages (from-to): 377-383
Number of pages: 7
Journal: KI - Künstliche Intelligenz
Volume: 35
Issue number: 3-4
DOIs
State: Published - Nov 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021, Gesellschaft für Informatik e.V. and Springer-Verlag GmbH Germany, part of Springer Nature.

Keywords

  • Computational linguistics
  • Lexical inference
  • Lexical semantics
  • Natural language processing
