Learning to Embed Semantic Similarity for Joint Image-Text Retrieval

Noam Malali, Yosi Keller

Research output: Contribution to journal › Article › peer-review


Abstract

We present a deep learning approach for learning the joint semantic embeddings of images and captions in a Euclidean space, such that semantic similarity is approximated by the L2 distances in the embedding space. To this end, we introduce a metric learning scheme that utilizes multitask learning to learn the embedding of identical semantic concepts using a center loss. By introducing a differentiable quantization scheme into the end-to-end trainable network, we derive a semantic embedding of semantically similar concepts in Euclidean space. We also propose a novel metric learning formulation using an adaptive-margin hinge loss that is refined during the training phase. The proposed scheme was applied to the MS-COCO, Flickr30K, and Flickr8K datasets, and was shown to compare favorably with contemporary state-of-the-art approaches.
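The abstract names two trainable objectives: a center loss over shared semantic concepts and an adaptive-margin hinge loss. The sketch below is a minimal, non-authoritative illustration of how such losses are commonly set up in PyTorch; the function names, the in-batch negative sampling, and the externally supplied margin are our assumptions, not the paper's actual formulation, and the paper's quantization scheme and margin-refinement rule are not reproduced here.

```python
import torch
import torch.nn.functional as F

def hinge_loss_adaptive_margin(img_emb, txt_emb, margin):
    """Bidirectional hinge (ranking) loss over in-batch negatives.

    img_emb, txt_emb: (B, D) embeddings of matching image-caption pairs.
    margin: scalar tensor; the paper refines its margin during training,
    whereas here it is simply passed in (assumed interface).
    """
    # Pairwise L2 distances between every image and every caption.
    dists = torch.cdist(img_emb, txt_emb, p=2)           # (B, B)
    pos = dists.diag().unsqueeze(1)                      # matching pairs, (B, 1)
    # Hinge on every negative caption per image, and vice versa.
    cost_i2t = F.relu(margin + pos - dists)              # image -> text
    cost_t2i = F.relu(margin + pos.t() - dists)          # text -> image
    # Positive pairs on the diagonal are not negatives; zero them out.
    mask = torch.eye(dists.size(0), dtype=torch.bool, device=dists.device)
    cost_i2t = cost_i2t.masked_fill(mask, 0.0)
    cost_t2i = cost_t2i.masked_fill(mask, 0.0)
    return cost_i2t.mean() + cost_t2i.mean()

def center_loss(emb, labels, centers):
    """Pull each embedding toward the center of its semantic concept.

    centers: (num_concepts, D) learnable concept centers (assumed setup).
    """
    return (emb - centers[labels]).pow(2).sum(dim=1).mean()
```

In a full pipeline the two terms would be weighted and summed into one training objective, with the margin updated by the paper's refinement rule; the weighting and update schedule are not specified here.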

Original language: English
Pages (from-to): 10252-10260
Number of pages: 9
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 12
DOIs
State: Published - 1 Dec 2022

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

Keywords

  • Text and image fusion
  • deep learning
  • joint embedding
