
DisCLIP: Open-Vocabulary Referring Expression Generation

Research output: Contribution to conference › Paper › peer-review

Abstract

Referring Expression Generation (REG) aims to produce textual descriptions that unambiguously identify specific objects within a visual scene. Traditionally, this has been achieved through supervised learning methods, which perform well on specific data distributions but often struggle to generalize to new images and concepts. To address this issue, we present a novel approach for REG, named DisCLIP, short for discriminative CLIP. We build on CLIP, a large-scale visual-semantic model, to guide an LLM to generate a contextual description of a target concept in an image while avoiding other distracting concepts. Notably, this optimization happens at inference time and does not require additional training or tuning of learned parameters. We measure the quality of the generated text by evaluating the capability of a receiver model to accurately identify the described object within the scene. To achieve this, we use a frozen zero-shot comprehension module as a critic of our generated referring expressions. We evaluate DisCLIP on multiple referring-expression benchmarks through human evaluation and show that it significantly outperforms previous methods on out-of-domain datasets. Our results highlight the potential of using pre-trained visual-semantic models for generating high-quality contextual descriptions in new visual domains.
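The discriminative selection idea described above can be sketched in a few lines. This is a minimal illustration only, assuming hand-made toy vectors in place of real CLIP text/image embeddings and plain cosine similarity as the scorer; the function names, candidate strings, and 3-d "embeddings" are hypothetical, not part of the paper's implementation:

```python
# Toy sketch of a discriminative CLIP-style selection step: among
# candidate descriptions (e.g. LLM proposals), pick the one most
# similar to the target region and least similar to distractors.
# NOTE: real CLIP embeddings are replaced here with toy unit vectors.
import math


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def discriminative_score(expr_vec, target_vec, distractor_vecs):
    # Reward similarity to the target, penalize the closest distractor,
    # so the winning expression singles the target out in context.
    return cosine(expr_vec, target_vec) - max(
        cosine(expr_vec, d) for d in distractor_vecs
    )


def pick_expression(candidates, target_vec, distractor_vecs):
    # candidates: list of (text, embedding) pairs.
    return max(
        candidates,
        key=lambda c: discriminative_score(c[1], target_vec, distractor_vecs),
    )[0]


# Hypothetical 3-d "embeddings" for illustration only.
target = [1.0, 0.0, 0.0]           # e.g. crop of the red mug on the left
distractors = [[0.0, 1.0, 0.0]]    # e.g. crop of another mug in the scene
candidates = [
    ("a mug", [0.7, 0.7, 0.0]),                    # ambiguous: near both crops
    ("the red mug on the left", [0.9, 0.1, 0.0]),  # discriminative
]
print(pick_expression(candidates, target, distractors))  # → the red mug on the left
```

Because the score subtracts the best-matching distractor, the ambiguous candidate "a mug" nets roughly zero even though it matches the target well, which is the contextual pressure the abstract describes.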

Original language: English
State: Published - 2023
Event: 34th British Machine Vision Conference, BMVC 2023 - Aberdeen, United Kingdom
Duration: 20 Nov 2023 – 24 Nov 2023

Conference

Conference: 34th British Machine Vision Conference, BMVC 2023
Country/Territory: United Kingdom
City: Aberdeen
Period: 20/11/23 – 24/11/23

Bibliographical note

Publisher Copyright:
© 2023. The copyright of this document resides with its authors.
