Training with exploration improves a greedy stack LSTM parser

Miguel Ballesteros, Yoav Goldberg, Chris Dyer, Noah A. Smith

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

40 Scopus citations

Abstract

We adapt the greedy stack LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of assuming an error-free action history. This form of training, which accounts for model predictions at training time, improves parsing accuracies. We discuss some modifications needed in order to get training with exploration to work well for a probabilistic neural network dependency parser.
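The core idea of the abstract can be illustrated with a minimal toy sketch. This is not the authors' stack LSTM parser: the toy task (moving a cursor toward a target) stands in for transition-based parsing, the linear softmax model stands in for the neural network, and all names (`oracle_action`, `explore_p`, etc.) are hypothetical. What it does show is the training-with-exploration loop: the loss at each state is computed against the dynamic oracle's best action *from that state*, while the state transitions follow the model's own, possibly erroneous, predictions with some probability, so training sees the kinds of states the greedy parser will actually reach at test time.

```python
import math
import random

random.seed(0)

def oracle_action(pos, target):
    """Dynamic oracle: the best action from the CURRENT state,
    even if earlier actions already deviated from the gold path.
    Action 1 = move right, action 0 = move left."""
    return 1 if target > pos else 0

def features(pos, target):
    # Bias term plus signed distance to the target.
    return [1.0, float(target - pos)]

def scores(w, x):
    """Softmax over per-action linear scores."""
    s = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    m = max(s)
    e = [math.exp(v - m) for v in s]
    z = sum(e)
    return [v / z for v in e]

def train(episodes=200, explore_p=0.9, lr=0.1):
    w = [[0.0, 0.0], [0.0, 0.0]]  # one weight row per action
    for _ in range(episodes):
        pos, target = 0, random.choice([-3, 3])
        for _ in range(6):
            x = features(pos, target)
            p = scores(w, x)
            gold = oracle_action(pos, target)
            # Cross-entropy gradient step against the dynamic-oracle action.
            for a in range(2):
                g = p[a] - (1.0 if a == gold else 0.0)
                for j in range(2):
                    w[a][j] -= lr * g * x[j]
            # Exploration: with probability explore_p, follow the model's own
            # greedy prediction (which may be wrong) instead of the oracle.
            if random.random() < explore_p:
                act = 1 if p[1] > p[0] else 0
            else:
                act = gold
            pos += 1 if act == 1 else -1
    return w

w = train()
```

Setting `explore_p = 0` recovers ordinary static-oracle training, which assumes an error-free action history; the paper's point is that raising it exposes the model to its own mistakes during training.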

Original language: English
Title of host publication: EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings
Publisher: Association for Computational Linguistics (ACL)
Pages: 2005-2010
Number of pages: 6
ISBN (Electronic): 9781945626258
DOIs
State: Published - 2016
Event: 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 - Austin, United States
Duration: 1 Nov 2016 → 5 Nov 2016

Publication series

Name: EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings

Conference

Conference: 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016
Country/Territory: United States
City: Austin
Period: 1/11/16 → 5/11/16

Bibliographical note

Publisher Copyright:
© 2016 Association for Computational Linguistics

Funding

This work was sponsored in part by the U. S. Army Research Laboratory and the U. S. Army Research Office under contract/grant number W911NF-10-1-0533, and in part by NSF CAREER grant IIS-1054319. Miguel Ballesteros was supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA). Yoav Goldberg is supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI), a Google Research Award and the Israeli Science Foundation (grant number 1555/15).

Funders and funder numbers:
Israeli Science Foundation: 1555/15
NSF CAREER
U.S. Army Research Laboratory
U.S. Army Research Office: W911NF-10-1-0533
National Science Foundation: IIS-1054319
Google
U.S. Army Aeromedical Research Laboratory
European Commission: FP7-ICT-610411, H2020-RIA-645012
Intel Collaborative Research Institute for Computational Intelligence
