Abstract
We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection. We evaluate the model on three previously studied morphological inflection generation datasets and show that it provides state-of-the-art results in various setups compared to previous neural and non-neural approaches. Finally, we present an analysis of the continuous representations learned by both the hard and soft attention (Bahdanau et al., 2015) models for the task, shedding some light on the features such models extract.
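The decoding scheme described in the abstract can be illustrated with a toy sketch: the decoder attends to exactly one input character at a time and, at each step, either writes an output symbol or emits a special "step" action that advances the attention pointer monotonically over the input. The rule-based policy below is a hypothetical stand-in for the trained network's action predictor, not the authors' model; names like `decode` and `copy_and_suffix_policy` are illustrative.

```python
STEP = "<step>"  # special action: advance the hard attention pointer


def decode(word, policy):
    """Hard monotonic attention decode loop (toy sketch).

    Consumes actions from `policy`: STEP advances the monotonic pointer
    over the input; any other action is an output character to append.
    """
    pointer, output = 0, []
    for action in policy(word):
        if action == STEP:
            pointer = min(pointer + 1, len(word) - 1)  # monotonic, clamped
        else:
            output.append(action)
    return "".join(output)


def copy_and_suffix_policy(word):
    """Hypothetical oracle policy for a regular English past tense:
    copy each attended character, step, then write the suffix 'ed'.
    A real model would instead score actions with an RNN conditioned on
    the attended encoder state and the decoder state."""
    for ch in word:
        yield ch    # write the currently attended character
        yield STEP  # advance hard attention to the next character
    yield from "ed"  # suffix written after the input is consumed


print(decode("walk", copy_and_suffix_policy))  # -> "walked"
```

The key property the sketch mirrors is that attention never moves backward, matching the nearly-monotonic character alignment the abstract describes.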
Original language | English |
---|---|
Title of host publication | ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 2004-2015 |
Number of pages | 12 |
ISBN (Electronic) | 9781945626753 |
DOIs | |
State | Published - 2017 |
Event | 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada; 30 Jul 2017 → 4 Aug 2017 |
Publication series
Name | ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) |
---|---|
Volume | 1 |
Conference
Conference | 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 |
---|---|
Country/Territory | Canada |
City | Vancouver |
Period | 30/07/17 → 4/08/17 |
Bibliographical note
Publisher Copyright: © 2017 Association for Computational Linguistics.
Funding
This work was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI), and The Israeli Science Foundation (grant number 1555/15).
Funders | Funder number |
---|---|
Israeli Science Foundation | 1555/15 |
Intel Collaborative Research Institute for Computational Intelligence | |