Greedy transition-based dependency parsing with stack LSTMs

Miguel Ballesteros, Chris Dyer, Yoav Goldberg, Noah A. Smith

Research output: Contribution to journal › Article › peer-review


Abstract

We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks—the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser’s state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance.
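The stack LSTM idea can be sketched compactly. The snippet below is a minimal, illustrative PyTorch sketch and not the authors' released implementation; the class name, method names, and toy dimensions are assumptions introduced here for exposition. Each push runs an LSTM cell on top of the previous state, each pop simply restores the previous state, and the hidden state at the top serves as the continuous-space summary of the stack contents.

```python
import torch
import torch.nn as nn


class StackLSTM(nn.Module):
    """Stack with constant-time push/pop that also maintains an LSTM
    encoding of its contents (illustrative sketch; not the authors' API)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        # Each entry stores the LSTM state computed *after* pushing it, so a
        # pop just discards the top entry and re-exposes the previous state.
        empty = (torch.zeros(1, hidden_size), torch.zeros(1, hidden_size))
        self.states = [empty]   # history of (h, c) pairs
        self.contents = []      # embeddings currently on the stack

    def push(self, x):
        """Push embedding x of shape (1, input_size) in O(1)."""
        h, c = self.cell(x, self.states[-1])
        self.states.append((h, c))
        self.contents.append(x)

    def pop(self):
        """Pop and return the top embedding in O(1)."""
        self.states.pop()
        return self.contents.pop()

    def summary(self):
        """Continuous-space encoding of the whole stack: the top hidden state."""
        return self.states[-1][0]


# Toy usage: push two word vectors, pop one, read the stack encoding.
stack = StackLSTM(input_size=4, hidden_size=8)
stack.push(torch.randn(1, 4))
stack.push(torch.randn(1, 4))
stack.pop()
print(stack.summary().shape)  # torch.Size([1, 8])
```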

Original language: English
Pages (from-to): 311-347
Number of pages: 37
Journal: Computational Linguistics
Volume: 43
Issue number: 2
State: Published - 1 Jun 2017

Bibliographical note

Funding Information:
We would like to thank Lingpeng Kong and Jacob Eisenstein for comments on an earlier version of this article and Danqi Chen for assistance with the parsing data sets. We would also like to thank Bernd Bohnet and Joakim Nivre for their help with the parsing algorithms. This work was sponsored in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant number W911NF-10-1-0533, and in part by NSF CAREER grant IIS-1054319. Miguel Ballesteros is supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA). Yoav Goldberg is supported by The Israeli Science Foundation (grant number 1555/15) and a Google Research Award.

Publisher Copyright:
© 2017 Association for Computational Linguistics.
