Abstract
In all previous work on deep multi-task learning that we are aware of, all tasks are supervised at the same (outermost) layer. We present a multi-task learning architecture with deep bi-directional RNNs, in which supervision for different tasks can occur at different layers. We present experiments on syntactic chunking and CCG supertagging, coupled with the additional task of POS-tagging. We show that it is consistently better to have POS supervision at the innermost rather than the outermost layer. We argue that this is because "low-level" tasks are better kept at the lower layers, enabling the higher-level tasks to make use of the shared representation of the lower-level tasks. Finally, we also show how this architecture can be used for domain adaptation.
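The core idea can be illustrated with a minimal sketch of such a hierarchically supervised tagger. This is not the authors' implementation; it is a hedged illustration in PyTorch, with module names, layer sizes, and tag-set sizes chosen purely for exposition. The low-level task (POS) is predicted from the inner bi-LSTM layer, while the higher-level task (e.g. chunking or CCG supertagging) is predicted from the outermost layer, which sits on top of the shared inner representation.

```python
# Minimal sketch (assumed, not the authors' code) of multi-task supervision
# at different layers: POS at the inner bi-LSTM layer, chunking at the outer.
import torch
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=100,
                 n_pos_tags=45, n_chunk_tags=23):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Inner bi-LSTM layer: shared across tasks, supervised with POS.
        self.rnn_inner = nn.LSTM(emb_dim, hidden_dim,
                                 bidirectional=True, batch_first=True)
        # Outer bi-LSTM layer: reads the inner layer's states and is
        # supervised with the higher-level task (chunking / supertagging).
        self.rnn_outer = nn.LSTM(2 * hidden_dim, hidden_dim,
                                 bidirectional=True, batch_first=True)
        self.pos_head = nn.Linear(2 * hidden_dim, n_pos_tags)
        self.chunk_head = nn.Linear(2 * hidden_dim, n_chunk_tags)

    def forward(self, tokens):
        x = self.embed(tokens)                   # (batch, seq, emb_dim)
        h_inner, _ = self.rnn_inner(x)           # (batch, seq, 2*hidden_dim)
        h_outer, _ = self.rnn_outer(h_inner)
        pos_logits = self.pos_head(h_inner)      # POS supervised at inner layer
        chunk_logits = self.chunk_head(h_outer)  # chunking at outer layer
        return pos_logits, chunk_logits
```

In a setup like this, training would typically alternate between the two tasks' datasets; because the POS head is attached to the inner layer, its loss updates only the shared lower parameters, while the chunking loss updates both layers.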
Original language | English
---|---
Title of host publication | 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Short Papers
Publisher | Association for Computational Linguistics (ACL)
Pages | 231-235
Number of pages | 5
ISBN (Electronic) | 9781510827592
DOIs |
State | Published - 2016
Event | 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, Berlin, Germany, 7 Aug 2016 → 12 Aug 2016
Publication series

Name | 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Short Papers
---|---
Conference

Conference | 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016
---|---
Country/Territory | Germany
City | Berlin
Period | 7/08/16 → 12/08/16
Bibliographical note

Publisher Copyright: © 2016 Association for Computational Linguistics.