AUXILIARY LEARNING BY IMPLICIT DIFFERENTIATION

Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya

Research output: Contribution to conference › Paper › peer-review

Abstract

Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest. Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss. Here, we propose a novel framework, AuxiLearn, that targets both challenges based on implicit differentiation. First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function. This network can learn nonlinear interactions between tasks. Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task. We evaluate AuxiLearn in a series of tasks and domains, including image segmentation and learning with attributes in the low data regime, and find that it consistently outperforms competing methods.
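
To make the abstract's first idea concrete, the sketch below illustrates combining per-task losses with a small learned network trained by differentiating a validation loss through one simulated inner step, a cheap stand-in for the implicit-function-theorem hypergradient the paper uses. This is a toy construction, not the authors' implementation: the combiner architecture, the one-step approximation, and all names (combiner, inner_lr, the synthetic regression data) are assumptions made for illustration.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy setup: one shared linear model, a main and an auxiliary regression task.
    d = 5
    X_tr, X_val = torch.randn(64, d), torch.randn(32, d)
    w_main, w_aux = torch.randn(d), torch.randn(d)
    y_tr_main, y_tr_aux = X_tr @ w_main, X_tr @ w_aux
    y_val_main = X_val @ w_main

    w = torch.zeros(d, requires_grad=True)  # model parameters (inner problem)

    # g_phi: a small network mapping the vector of per-task losses to a
    # single scalar training objective (outer problem); it can represent
    # nonlinear interactions between the task losses.
    combiner = nn.Sequential(nn.Linear(2, 8), nn.Softplus(),
                             nn.Linear(8, 1), nn.Softplus())

    opt_w = torch.optim.SGD([w], lr=0.05)
    opt_phi = torch.optim.SGD(combiner.parameters(), lr=0.01)
    inner_lr = 0.05

    def combined_loss(w_):
        """Training losses of both tasks, merged by the learned combiner."""
        losses = torch.stack([((X_tr @ w_ - y_tr_main) ** 2).mean(),
                              ((X_tr @ w_ - y_tr_aux) ** 2).mean()])
        return combiner(losses).squeeze()

    for step in range(200):
        # Outer step: differentiate the validation loss of the *main* task
        # through one simulated inner SGD step, a one-step approximation
        # of the implicit hypergradient.
        total = combined_loss(w)
        (g,) = torch.autograd.grad(total, w, create_graph=True)
        val_loss = ((X_val @ (w - inner_lr * g) - y_val_main) ** 2).mean()
        opt_phi.zero_grad()
        val_loss.backward()
        opt_phi.step()

        # Inner step: ordinary SGD on the combined loss under the updated combiner.
        opt_w.zero_grad()
        combined_loss(w).backward()
        opt_w.step()

The same outer, validation-driven gradient would extend to the paper's second setting: replace the fixed auxiliary loss above with one produced by a learned auxiliary-task network and tune that network's parameters the same way.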

Original language: English
State: Published - 2021
Event: 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online
Duration: 3 May 2021 – 7 May 2021

Conference

Conference: 9th International Conference on Learning Representations, ICLR 2021
City: Virtual, Online
Period: 3/05/21 – 7/05/21

Bibliographical note

Publisher Copyright:
© 2021 ICLR 2021 - 9th International Conference on Learning Representations. All rights reserved.
