Abstract
When describing images with natural language, descriptions can be made more informative if they are tuned for downstream tasks. This can be achieved by training two networks: a 'speaker' that generates sentences given an image and a 'listener' that uses them to perform a task. Unfortunately, jointly training multiple networks to communicate faces two major challenges. First, the descriptions generated by a speaker network are discrete and stochastic, making optimization very hard and inefficient. Second, joint training usually causes the vocabulary used during communication to drift and diverge from natural language. To address these challenges, we present an effective optimization technique based on partial sampling from a multinomial distribution combined with straight-through gradient updates, which we name PSST, for Partial-Sampling Straight-Through. We then show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions. Together, this approach yields descriptions that are both more discriminative and more natural than those of previous approaches. Evaluations on the COCO benchmark show that PSST improves recall@10 from 60% to 86% while maintaining comparable language naturalness. Human evaluations show that it also increases naturalness while keeping the discriminative power of generated captions.
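To make the technique named in the abstract concrete, the sketch below shows one way partial sampling could be combined with a straight-through estimator: a random subset of word positions is drawn from the speaker's multinomial output while the remaining positions take the argmax, and the hard one-hot choices are used in the forward pass while gradients flow through the softmax probabilities. This is a minimal PyTorch sketch based only on the abstract; `psst_sample`, `sample_frac`, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def psst_sample(logits, sample_frac=0.5):
    """Hypothetical sketch of partial-sampling straight-through.

    logits: (batch, seq_len, vocab) unnormalized speaker scores.
    Returns one-hot word vectors whose backward pass routes
    gradients through the softmax probabilities.
    """
    probs = F.softmax(logits, dim=-1)
    batch, seq_len, vocab = probs.shape

    # Greedy (argmax) choice at every position.
    greedy = probs.argmax(dim=-1)                          # (batch, seq_len)

    # Multinomial sample at every position.
    sampled = torch.multinomial(probs.reshape(-1, vocab), 1)
    sampled = sampled.reshape(batch, seq_len)

    # Partial sampling: only a random fraction of positions
    # uses the stochastic sample; the rest stay greedy.
    use_sample = torch.rand(batch, seq_len, device=logits.device) < sample_frac
    tokens = torch.where(use_sample, sampled, greedy)

    # Straight-through: hard one-hot in the forward pass,
    # softmax gradient in the backward pass.
    hard = F.one_hot(tokens, vocab).float()
    return hard + probs - probs.detach()
```

Under this reading, `sample_frac` trades off exploration (more sampled positions) against gradient variance (more greedy positions), which is consistent with the abstract's claim that partial sampling makes optimization more efficient.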
Original language | English
---|---
Title of host publication | Proceedings - 2019 International Conference on Computer Vision, ICCV 2019
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 8897-8906
Number of pages | 10
ISBN (Electronic) | 9781728148038
DOIs |
State | Published - Oct 2019
Event | 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019 - Seoul, Korea, Republic of
Duration | 27 Oct 2019 → 2 Nov 2019
Publication series
Name | Proceedings of the IEEE International Conference on Computer Vision
---|---
ISSN (Print) | 1550-5499
Conference
Conference | 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
---|---
Country/Territory | Korea, Republic of
City | Seoul
Period | 27/10/19 → 2/11/19
Bibliographical note
Publisher Copyright: © 2019 IEEE.
Funding
Acknowledgement: We thank G. Shakhnarovich and Y. Goldberg for insightful discussions. This work was supported by Israel Science Foundation grant 737/18.
Funders | Funder number
---|---
Israel Science Foundation | 737/18