Morphological inflection generation with hard monotonic attention

Roee Aharoni, Yoav Goldberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

85 Scopus citations

Abstract

We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection. We evaluate the model on three previously studied morphological inflection generation datasets and show that it provides state-of-the-art results in various setups compared to previous neural and non-neural approaches. Finally, we present an analysis of the continuous representations learned by both the hard and soft attention (Bahdanau et al., 2015) models for the task, shedding some light on the features such models extract.
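The control flow the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: in the paper, an RNN scores the actions, whereas here a hypothetical hand-written policy stands in for the learned model. The sketch shows the key idea of hard monotonic attention: a single attention pointer over the input characters that only moves left to right, with the decoder choosing at each step to either advance the pointer or write an output character.

```python
# Illustrative sketch of hard monotonic attention decoding (assumed toy
# setup, not the paper's trained model). A pointer attends to one lemma
# character at a time and may only advance monotonically.

def hard_monotonic_decode(lemma, policy, max_steps=100):
    """Decode with a hard attention pointer that never moves backwards.

    At each step, `policy` returns ("step",) to advance the pointer,
    ("write", ch) to emit an output character, or None to stop. This
    mirrors the nearly-monotonic character alignments between a word
    and its inflection that the model exploits.
    """
    pointer, output = 0, []
    for _ in range(max_steps):
        action = policy(lemma, pointer, output)
        if action is None:                              # end of sequence
            break
        if action[0] == "step":
            pointer = min(pointer + 1, len(lemma) - 1)  # monotonic advance
        else:
            output.append(action[1])  # write while attending to lemma[pointer]
    return "".join(output)

def toy_past_tense_policy(lemma, pointer, output):
    """Hypothetical stand-in for the learned policy: copy the lemma
    character by character under the pointer, then append '-ed'."""
    if len(output) < len(lemma):
        # Copy the attended character, stepping the pointer between writes.
        return ("write", lemma[pointer]) if len(output) == pointer else ("step",)
    suffix = "ed"
    done = len(output) - len(lemma)
    return ("write", suffix[done]) if done < len(suffix) else None

print(hard_monotonic_decode("walk", toy_past_tense_policy))  # walked
```

In the paper, the write/step actions are predicted by a neural decoder trained on character alignments; the toy policy above merely makes the monotonic copy-then-suffix behavior concrete.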

Original language: English
Title of host publication: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 2004-2015
Number of pages: 12
ISBN (Electronic): 9781945626753
DOIs
State: Published - 2017
Event: 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 - Vancouver, Canada
Duration: 30 Jul 2017 – 4 Aug 2017

Publication series

Name: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Volume: 1

Conference

Conference: 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017
Country/Territory: Canada
City: Vancouver
Period: 30/07/17 – 4/08/17

Bibliographical note

Publisher Copyright:
© 2017 Association for Computational Linguistics.
